Import data from CSV files into Jira

Pascal Robert
Apr 25, 2020 · 3 min read

Since Jira is Java-based, you can use any Type 4 JDBC driver with it. Jira itself supports MySQL, Oracle Database, PostgreSQL and MS SQL Server, but you can still install any other JDBC driver so that ScriptRunner code or an in-house add-on can query that database. And one of these “databases” can actually be… CSV files.

This is something I had to do in the past, and it might be useful for other Jira administrators. Let’s say you want to import data from a CSV file into a couple of custom fields every night. Let’s do this by creating a service with ScriptRunner that will import the data at 1:00 am every day of the week.

First, you need to install the CsvJdbc driver, which can be downloaded from SourceForge, into the lib directory of the Jira installation (on Linux, the default path would be /opt/atlassian/jira/lib/). This is the same directory where the JDBC driver for the Jira database is located. Restart Jira once the driver is on the server.

Next, put your CSV files into your JIRA_HOME (you can get the location from Jira -> System -> System Information). It might be a good idea to create a csv directory in JIRA_HOME to store the files.
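If you are not sure where JIRA_HOME lives on disk, a quick sketch like the one below, run from the ScriptRunner script console, resolves the same path the import script will use later and creates the csv subdirectory if it is missing (the csv directory name is just my own convention here):

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.util.JiraHome

// Resolve JIRA_HOME the same way the import script will
def jiraHome = ComponentAccessor.getComponentOfType(JiraHome.class).getHome()
def csvDir = new File(jiraHome, "csv")

// Create the csv directory if it does not exist yet
if (!csvDir.exists()) {
    csvDir.mkdirs()
}

// The console prints the value of the last expression
return csvDir.absolutePath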

The CSV we are going to use for this tutorial is quite simple:

Header 1;Header 2;
Column 1, line 1;Column 2, line 1;
Column 1, line 2;Column 2, line 2;
Column 1, line 3;Column 2, line 3;
Column 1, line 4;Column 2, line 4;
Column 1, line 5;Column 2, line 5;
Column 1, line 6;Column 2, line 6;

Let’s write the script. We are going to import the data into two single-select fields called External Data 1 and External Data 2.

The code:

import groovy.sql.Sql
import java.sql.Driver
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.util.JiraHome
import com.atlassian.jira.issue.context.IssueContextImpl
import com.atlassian.jira.issue.customfields.option.Option

def cfm = ComponentAccessor.getCustomFieldManager()
def externalDataField1 = cfm.getCustomFieldObjectsByName("External Data 1").getAt(0)
def externalDataField2 = cfm.getCustomFieldObjectsByName("External Data 2").getAt(0)
def om = ComponentAccessor.getOptionsManager()

// Fetch an issue of the right project/issue type so that we resolve the correct field context
def issue = ComponentAccessor.getIssueManager().getIssueByCurrentKey("COLINFRA-23")
def issueContext = new IssueContextImpl(issue.getProjectId(), issue.getIssueTypeId())
def fieldConfig1 = externalDataField1.getRelevantConfig(issueContext)
def fieldConfig2 = externalDataField2.getRelevantConfig(issueContext)

// Existing options on each field (handy if you want to compare before adding, see below)
def optionsField1 = om.getOptions(fieldConfig1)
def optionsField2 = om.getOptions(fieldConfig2)

// Open a JDBC connection on the csv directory inside JIRA_HOME
def driver = Class.forName('org.relique.jdbc.csv.CsvDriver').newInstance() as Driver
def pathToFiles = ComponentAccessor.getComponentOfType(JiraHome.class).getHome().getAbsolutePath() + File.separator + "csv"
def conn = driver.connect("jdbc:relique:csv:" + pathToFiles + "?separator=;", new Properties())
def sql = new Sql(conn)

def newOptions = [] as ArrayList<Option>
def iteratorCount = 1
try {
    // The table name is the file name without its .csv extension (test.csv -> test)
    sql.eachRow("select * from test") { row ->
        newOptions.add(om.createOption(fieldConfig1, 0, iteratorCount, (String) row.getAt(0)))
        newOptions.add(om.createOption(fieldConfig2, 0, iteratorCount, (String) row.getAt(1)))
        iteratorCount++
    }
} finally {
    sql.close()
    conn.close()
}
om.updateOptions(newOptions)

That’s it! By default, the driver treats the first line as the column names of the “database” table, so there is no need to skip it, and each file in the directory becomes a table named after the file, which is why select * from test reads test.csv. We simply iterate over the CSV file, read the two columns, create a new option (value) in each custom field, and update the list of options at the end.
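Since the header line supplies the column names, you can also read the values by name instead of by position, which is a bit more robust if the columns are ever reordered in the file. A minimal sketch, meant as a drop-in replacement for the eachRow loop above, assuming the headers from the sample file and the standard ResultSet lookup by column label:

sql.eachRow("select * from test") { row ->
    // Look the values up by their header names instead of by index
    def value1 = row.getString("Header 1")
    def value2 = row.getString("Header 2")
    newOptions.add(om.createOption(fieldConfig1, 0, iteratorCount, value1))
    newOptions.add(om.createOption(fieldConfig2, 0, iteratorCount, value2))
    iteratorCount++
}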

If you rerun the script, it will add the same values again. The way to avoid this is either to compare the existing values with the new ones and only add the ones that are new, or to delete the existing options (values) and re-add everything. Comparing is the better method, because users or other processes might be using the fields while you are wiping and re-adding the data, and they might wonder what is going on.
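A minimal sketch of the comparison approach, reusing the optionsField1 and optionsField2 collections the script already fetches; again a drop-in replacement for the eachRow loop above:

// Values that already exist as options on each field
def existingValues1 = optionsField1*.getValue()
def existingValues2 = optionsField2*.getValue()

sql.eachRow("select * from test") { row ->
    def value1 = (String) row.getAt(0)
    def value2 = (String) row.getAt(1)
    // Only queue an option for creation if the field does not have it yet
    if (!existingValues1.contains(value1)) {
        newOptions.add(om.createOption(fieldConfig1, 0, iteratorCount, value1))
    }
    if (!existingValues2.contains(value2)) {
        newOptions.add(om.createOption(fieldConfig2, 0, iteratorCount, value2))
    }
    iteratorCount++
}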

The script has this line:

def issue = ComponentAccessor.getIssueManager().getIssueByCurrentKey("COLINFRA-23")

Since custom fields can have contexts (different options based on project or issue type), you need to fetch an issue with the correct issue type and/or project to resolve the correct context.
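If your fields only have the default, global context, you do not actually need a reference issue. A sketch of that shortcut, assuming a single configuration scheme on the field:

import com.atlassian.jira.component.ComponentAccessor

def cfm = ComponentAccessor.getCustomFieldManager()
def externalDataField1 = cfm.getCustomFieldObjectsByName("External Data 1").getAt(0)

// With a single (global) context, the one and only config is the one we want
def fieldConfig1 = externalDataField1.getConfigurationSchemes().first().getOneAndOnlyConfig()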

If you wish to run this code as a service, somewhat like a cron job, put the code in a file stored on your server in JIRA_HOME/scripts, and follow the ScriptRunner documentation on registering it as a service.

Have fun!
