Use of the BridgeDb metabolite ID mapping database in PathVisio

A long time ago, Martijn van Iersel wrote a PathVisio plugin that visualizes the 2D chemical structures of metabolites in pathways as found on WikiPathways. Some time ago I tried to update it to a more recent CDK version, but at the time I did not have enough time to get it going. However, John May's helpful DepictionGenerator made it a lot easier, so I set out this morning to update the code base to use this class and CDK 1.5.13 (well, strictly speaking it's running a prerelease (snapshot) of CDK 1.5.14). With success:


The released version is a bit more tweaked and shows the 2D structure diagram filling more of the Structure tab. I have submitted the plugin to the PathVisio Plugin Repository.
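
For the curious, with the new DepictionGenerator the core of the plugin boils down to something like this (a minimal Groovy sketch of how the CDK classes are used, not the plugin's actual code; the SMILES is that of uracil):

import org.openscience.cdk.depict.DepictionGenerator
import org.openscience.cdk.silent.SilentChemObjectBuilder
import org.openscience.cdk.smiles.SmilesParser

// parse the SMILES of a metabolite (here: uracil)
parser = new SmilesParser(SilentChemObjectBuilder.getInstance())
mol = parser.parseSmiles("O=C1NC(=O)C=CN1")

// render a 2D structure diagram to a PNG file
new DepictionGenerator()
  .withSize(300, 300)
  .withAtomColors()
  .depict(mol)
  .writeTo("uracil.png")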

Now, you may know that these GPML pathways only contain identifiers, and no chemical structures. But this is where the metabolite identifier mapping database helps (doi:10.6084/m9.figshare.3413668.v1): it contains SMILES strings for many of the compounds. It does not contain SMILES strings from Wikidata yet, but I will start adding those in upcoming releases too. The current SMILES strings come from HMDB.

To show how all this works, check out the PathVisio screenshot below. The selected node in the pathway is labeled uracil, and the leftmost dialog in front was used to search the metabolite identifier mapping database; it found many hits in HMDB and Wikidata (middle dialog). The Wikidata identifier was chosen for the data node, allowing PathVisio to "interpret" the biological nature of that node in the pathway. However, along with many mapped identifiers (see the Backpage on the right), this also provides a SMILES that is used by the updated ChemPaint plugin.
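
Under the hood, getting from an identifier to a SMILES string is an attribute lookup in BridgeDb. A minimal sketch with the BridgeDb Java API (the database file name is an example, HMDB00300 is uracil, and I assume the SMILES strings are stored under the "SMILES" attribute name):

import org.bridgedb.BridgeDb
import org.bridgedb.DataSource
import org.bridgedb.Xref
import org.bridgedb.AttributeMapper
import org.bridgedb.bio.BioDataSource

// register the data sources and the relational database driver
BioDataSource.init()
Class.forName("org.bridgedb.rdb.IDMapperRdb")

// open the metabolite identifier mapping database
mapper = BridgeDb.connect("idmapper-pgdb:metabolites.bridge")

// "Ch" is the BridgeDb system code for HMDB
uracil = new Xref("HMDB00300", DataSource.getBySystemCode("Ch"))
println((mapper as AttributeMapper).getAttributes(uracil, "SMILES"))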


Setting up a local SPARQL endpoint

... has never been easier, and I have to say, with Virtuoso it already was easy.

Step 1: download the jar and fire up the server
OK, you do need Java installed, and for many this is still the case, despite Oracle doing their very best to totally ruin it for everyone. But seriously, visit the Blazegraph website (@blazegraph), download the jar, and type:

$ java -jar blazegraph.jar

It will give some output on the console, including the URL of a webpage with a SPARQL endpoint, an upload form, etc.


That it tracks past queries is a nice extra.

Step 2: there is no step two

Step 3: OK, OK, you also want to try a SPARQL from the command line
Now, I have to say, the webpage of the SPARQL endpoint does not have a "Download CSV" button. That would be a great addition, but doing it from the command line is not too hard either.

$ curl -i -H "Accept: text/csv" --data-urlencode \
  query@list.rq http://192.168.0.233:9999/blazegraph/sparql
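
The list.rq here is just a plain text file with the SPARQL query; any query will do, for example one that counts the resources per type (a minimal sketch):

SELECT ?class (COUNT(?s) AS ?count)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?count)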

But it would be nice if you did not have to copy/paste the query into a file, or go to the command line in the first place. Also, I had some trouble finding the correct SPARQL endpoint URL, as it seems to have changed at least twice in recent history, given the (outdated) documentation I found online (a common problem; no complaint!).

HT to Andra, who first mentioned Blazegraph to me, and to the Blazegraph team.

Metabolomics 2016 Write up #1: some interesting bits

A good conference needs some time to digest. A previous supervisor advised me that a conference visit of 5 days takes 5 full days to follow up on everything. I think he is right, though few of us actually block our schedules to make time for that. Anyway, I started following up on things last weekend, resulting in a first two blog posts:
The second is pretty much how I have been blogging a lot: it's my electronic lab notebook. The first is about how people can link out to WikiPathways; that post explains how to create links between identifiers and pathways.

But there was a lot of very interesting stuff at Metabolomics 2016. I hope to blog about more of it, but please find some initial coverage in the slides of a presentation I gave yesterday at our department:

Also check the Twitter hashtag #metsocdublin2016.

Harmonized identifiers in the WikiPathways RDF

Biological knowledge should not only be captured in nice graphics, but should be machine readable. Public domain image from Wikipedia.
WikiPathways describes biological processes. Entities in these processes are genes, gene products (like miRNAs and proteins), and metabolites. The pathways do not describe what these entities are, but only provide identifiers in external databases, allowing you to study their identity in those databases. Therefore, for metabolites you will not find chemical graphs but identifiers from HMDB, CAS, KEGG, ChEBI, and others.

To ensure experimental data can be mapped to these pathways, independent of whatever identifiers are used, BridgeDb was developed. WikiPathways uses a BridgeDb webservice, Open PHACTS embeds BridgeDb technologies in their Identifier Mapping Service (particularly developed by Carole Goble's team), and PathVisio uses local BridgeDb ID mapping files.

The WikiPathways SPARQL endpoint does not use the Open PHACTS IMS; instead, Andra introduced harmonized identifiers and provides these as additional triples in the WikiPathways RDF. For example:

SELECT DISTINCT ?gene (fn:substring(?ensId,32) as ?ensembl)
WHERE {
  ?gene a wp:GeneProduct ;
    wp:bdbEnsembl ?ensId .
}

Now, the gene resource IRIs actually use the Ensembl identifier when available, so this query returns redundant information, but there are other harmonized identifiers available:

SELECT DISTINCT ?type ?pred
WHERE {
  ?entity a ?type; ?pred [] .
  FILTER (regex(str(?pred),'bdb'))
}

That results in a table like this:


Therefore, for these databases it is easy to make links between those identifiers and the pathways in which entities with those identifiers are found. For example, to create a link between Ensembl identifiers and pathways, we could do something like:

SELECT DISTINCT
  ?pathwayRes (str(?wpid) as ?pathway)
  (str(?title) as ?pathwayTitle)
  (fn:substring(?ensId,32) as ?ensembl)
WHERE {
  ?gene a wp:GeneProduct ;
    dcterms:identifier ?id ;
    dcterms:isPartOf ?pathwayRes ;
    wp:bdbEnsembl ?ensId .
  ?pathwayRes a wp:Pathway ;
    dcterms:identifier ?wpid ;
    dc:title ?title .
}

I am collecting a number of those queries on the WikiPathways help wiki page with many example SPARQL queries. For example, check out the federated SPARQL queries listed there.

Two Apache Jena SPARQL query performance observations

Doing searches in RDF stores is commonly done with SPARQL queries. I have been using this with the semantic web translation of WikiPathways by Andra to find common content issues, sometimes combined with some additional Java code. For example: find PubMed identifiers that are not numbers.
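
For illustration, such a curation query can look like this sketch (the predicates are simplifying assumptions for this post, not necessarily the exact WikiPathways RDF vocabulary):

# sketch: assumes literature references are linked with dcterms:references
# and carry their PubMed identifier in dcterms:identifier
SELECT DISTINCT ?pubRef ?pubmedId
WHERE {
  ?pathwayElement dcterms:references ?pubRef .
  ?pubRef dcterms:identifier ?pubmedId .
  FILTER (!regex(str(?pubmedId), "^[0-9]+$"))
}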

Based on Ryan's work on interactions, a more complex curation query I recently wrote, in reply to issues that Alex ran into when converting pathways to BioPAX, finds interactions that convert a gene into another gene. These occur in WikiPathways because, graphically, you do not see the difference. I originally had this query:

SELECT (str(?organismName) as ?organism) ?page
       ?gene1 ?gene2 ?interaction
WHERE {
  ?gene1 a wp:GeneProduct .
  ?gene2 a wp:GeneProduct .
  ?interaction wp:source ?gene1 ;
    wp:target ?gene2 ;
    a wp:Conversion ;
    dcterms:isPartOf ?pathway .
  ?pathway foaf:page ?page ;
    wp:organismName ?organismName .
} ORDER BY ASC(?organism)

This query properly found all gene-gene conversions to be fixed. However, it was also horribly slow in my JUnit/Apache Jena set up, while it runs very efficiently on the Virtuoso-based SPARQL endpoint. I had been trying to speed it up in the past, but without much success. Instead, I ended up batching the testing on our Jenkins instance. But this got a bit silly, with at some point subsets of less than 100 pathways.
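
For context, the JUnit tests load the pathway RDF into an in-memory model and run the queries with the Jena API, roughly along these lines (a minimal Groovy sketch; the file names are placeholders):

import org.apache.jena.rdf.model.ModelFactory
import org.apache.jena.query.QueryFactory
import org.apache.jena.query.QueryExecutionFactory
import org.apache.jena.query.ResultSetFormatter

// load one pathway's RDF into an in-memory model
model = ModelFactory.createDefaultModel()
model.read(new FileInputStream("WP1546.ttl"), null, "TURTLE")

// parse and run the curation query against that model
query = QueryFactory.create(new File("geneGeneConversions.rq").text)
qexec = QueryExecutionFactory.create(query, model)
try {
  ResultSetFormatter.out(System.out, qexec.execSelect(), query)
} finally {
  qexec.close()
}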

Observation #1
So, I turned to Twitter, and quite soon got three useful leads. The first two suggestions did not solve the problem, but helped me rule out causes. Of course, there is literature about optimizing, like this recent paper by Antonis (doi:10.1016/j.websem.2014.11.003), but I have not been able to convert this knowledge into practical steps either. After ruling out these options (though I kept the sameTerm() suggestion), I realized it had to be the first two triple patterns, with the variables ?gene1 and ?gene2. So, I tried using FILTER there too, resulting in this query:

SELECT (str(?organismName) as ?organism) ?page
       ?gene1 ?gene2 ?interaction
WHERE {
  ?interaction wp:source ?gene1 ;
    wp:target ?gene2 ;
    a wp:Conversion ;
    dcterms:isPartOf ?pathway .
  ?pathway foaf:page ?page ;
    wp:organismName ?organismName .
  FILTER (!sameTerm(?gene1, ?gene2))
  FILTER EXISTS { ?gene1 a wp:GeneProduct }
  FILTER EXISTS { ?gene2 a wp:GeneProduct }
} ORDER BY ASC(?organism)

That did it! The time to run the query halved. Not so surprising, in retrospect, but it all depends on the SPARQL engine and which parts it runs first. Apparently, Jena's SPARQL engine starts at the top. This seems to be confirmed by the third comment I got. However, I always understood that engines can also start at the bottom.

Observation #2
But that's not all. This speed-up made me wonder something else. The problem clearly seems to be the engine's approach to deciding which parts of the query to run first. So, what if I remove further choices in what to run first? That leads me to a second observation: it helps significantly if you reduce the number of subgraphs the engine has to "merge" later. Instead, if possible, use property paths. That, again, about halved the runtime of the query. I ended up with the query below, which, obviously, no longer gives me access to the pathway resources, but I can live with that:

SELECT (str(?organismName) as ?organism) ?pathway
       ?gene1 ?gene2 ?interaction
WHERE {
  ?interaction wp:source ?gene1 ;
    wp:target ?gene2 ;
    a wp:Conversion ;
    dcterms:isPartOf/foaf:page ?pathway ;
    dcterms:isPartOf/wp:organismName ?organismName .
  FILTER (!sameTerm(?gene1, ?gene2))
  FILTER EXISTS {?gene1 a wp:GeneProduct}
  FILTER EXISTS {?gene2 a wp:GeneProduct}
} ORDER BY ASC(?organism)

I'm hoping these two observations may help others using Apache Jena for unit and integration testing of RDF generation too.

Loizou, A., Angles, R., Groth, P., Mar. 2015. On the formulation of performant SPARQL queries. Web Semantics: Science, Services and Agents on the World Wide Web 31, 1-26. http://dx.doi.org/10.1016/j.websem.2014.11.003

New Paper: "Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources"


Andra Waagmeester published a paper on his work on a semantic web version of WikiPathways (doi:10.1371/journal.pcbi.1004989). The paper outlines the design decisions, shows the SPARQL endpoint, and gives several example SPARQL queries. These include federated queries, like a mashup with DisGeNET (doi:10.1093/database/bav028) and EMBL-EBI's Expression Atlas. That results in nice visualisations like this:


If you have the relevant information in the pathway, these pathways can help a lot in understanding what is biologically going on. And, of course, they are used for exactly that a lot.

Press release
Because press releases have become an interesting tool in knowledge dissemination, I wanted to learn what it involves to get one out. This involved the people at PLOS Computational Biology and the press offices of the Gladstone Institutes and our Maastricht University (press release 1, press release 2 EN/NL). There is one thing I learned in retrospect, and I am pissed with myself that I did not think of this: you should always have a graphic supporting your story. I have been doing this for a long time in my blog now (sometimes I still forget), but did not think of it for the press release. The press release was picked up by three outlets, though all basically as we presented it to them (thanks to Altmetric.com):


SPARQL
But what makes me appreciate this piece of work, and WikiPathways itself, is how it creates a central hub of biological knowledge. Pathway databases capture knowledge that is not easily embedded in generally structured (relational) databases. As such, expressing this in the RDF format seems simple enough. The thing I really love about this approach is that your queries become machine-readable stories, particularly when you start using human-readable variants of SPARQL for this. And you can share these queries with the online scientific community with, for example, myExperiment.

There are two ways in which I have used SPARQL on WikiPathways data for metabolomics: 1. curation; 2. statistics. Data analysis is harder, because in the RDF world scientific lenses are needed to accommodate the chemical structural-temporal complexity of metabolites. For curation, we have long used SPARQL in unit tests to support the curation of WikiPathways. Moreover, I have manually used the SPARQL endpoint to find curation tasks. But now that the paper is out, I can blog about this more. For now, many example SPARQL queries can be found in the WikiPathways wiki. It features several queries showing statistics, but also some for curation. This is an example query I use to improve the interoperability of WikiPathways with Wikidata (also for BridgeDb); it finds metabolites without a mapped Wikidata identifier:

SELECT DISTINCT ?metabolite WHERE {
  ?metabolite a wp:Metabolite .
  OPTIONAL { ?metabolite wp:bdbWikidata ?wikidata . }
  FILTER (!BOUND(?wikidata))
}

Feel free to give this query a go at sparql.wikipathways.org!

Triptych
This paper completes a nice triptych of three papers about WikiPathways in the past 6 months. Thanks to the whole community and the very many contributors! All three papers are linked below.

Waagmeester, A., Kutmon, M., Riutta, A., Miller, R., Willighagen, E. L., Evelo, C. T., Pico, A. R., Jun. 2016. Using the semantic web for rapid integration of WikiPathways with other biological online data resources. PLoS Comput Biol 12 (6), e1004989+. http://dx.doi.org/10.1371/journal.pcbi.1004989
Bohler, A., Wu, G., Kutmon, M., Pradhana, L. A., Coort, S. L., Hanspers, K., Haw, R., Pico, A. R., Evelo, C. T., May 2016. Reactome from a WikiPathways perspective. PLoS Comput Biol 12 (5), e1004941+. http://dx.doi.org/10.1371/journal.pcbi.1004941
Kutmon, M., Riutta, A., Nunes, N., Hanspers, K., Willighagen, E. L., Bohler, A., Mélius, J., Waagmeester, A., Sinha, S. R., Miller, R., Coort, S. L., Cirillo, E., Smeets, B., Evelo, C. T., Pico, A. R., Jan. 2016. WikiPathways: capturing the full diversity of pathway knowledge. Nucleic Acids Research 44 (D1), D488-D494. http://dx.doi.org/10.1093/nar/gkv1024

Wikidata showing chemical properties with references

As you have seen in my blog, I'm a fan of Wikidata. Because of its open nature, it's creating an enormous ecosystem, in which many scientists are involved, with innovative visualizations. Data comes from many trusted databases, but the complexity of it all requires some hard decisions now and then. However, unlike many other databases, Wikidata has data provenance high on the agenda: all statements can be complemented with primary literature references, which I have been using when porting the pKa data.

SQID page for aspirin in Wikidata.
A new visualization of the data is provided by SQID, by Markus Krötzsch et al. This interface propagates the references for each bit of fact, though by default they are hidden behind an arrow icon at the top right of the fact. Clicking that will show the provenance, though that is currently still often a database, rather than primary literature.


Section of the SQID page for aspirin, with references given for solubility, mass, and a hazardous chemical exposure.
I really like where this is going! Why have publishers not been able to do something like this in the past 20 years?? This is knowledge dissemination as we want to see it.
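
You can also retrieve those references programmatically, of course. For example, this query for the Wikidata Query Service lists the sources stated for aspirin's mass (a small sketch using the service's default prefixes; Q18216 is aspirin, P2067 is mass, and P248 is "stated in"):

SELECT ?mass ?source WHERE {
  wd:Q18216 p:P2067 ?statement .
  ?statement ps:P2067 ?mass ;
    prov:wasDerivedFrom/pr:P248 ?source .
}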

Comparing sets of identifiers: the Bioclipse implementation

Source: Wikipedia
The problem
That sounds easy: take two collections of identifiers, put them in sets, determine the intersection, done. Sadly, each collection uses identifiers from different databases. Worse, a single collection mixes identifiers from multiple databases. Mind you, I'm not going full monty, though some chemistry will be involved at some point. Instead, this post is really about identifiers.

The example
Data set 1:

Data set 2: all metabolites from WikiPathways. This set has many different data sources, seven of which provide more than 100 unique identifiers. The full list of metabolite identifiers is here.

The goal
Determine the intersection of two collections of identifiers from arbitrary databases, ultimately using scientific lenses. I will develop at least two solutions: one based on Bioclipse (this post) and one based on R (later).

Needs
First of all, we need something that links IDs in the first place. Not surprisingly, I will be using BridgeDb (doi:10.1186/1471-2105-11-5) for that, but for small molecules alternatives exist, like the Open PHACTS IMS based on BridgeDb, the Chemical Translation Service (doi:10.1093/bioinformatics/btq476) or UniChem (doi:10.1186/s13321-014-0043-5, doi:10.1186/1758-2946-5-3).

The Bioclipse implementation
The first thing we need to do is read the files. I have them saved as CSV, even though they are tab-separated files. Bioclipse will now open them in its matrix editor (yes, I think .tsv needs to be linked to that editor, which does not seem to be the case yet). Reading the human metabolites from WikiPathways is done with this code (using Groovy as scripting language):

file1 = new File(
  bioclipse.fullPath(
    "/Compare Identifiers/human_metabolite_identifiers.csv"
  )
)
set1 = new java.util.HashSet();
file1.eachLine { line ->
  fields = line.split(/\t/)
  if (fields.size() >= 2) { // skip empty or malformed lines
    def syscode = fields[0]
    def id = fields[1]
    if (syscode != "syscode") { // ok, not the header line
      set1.add(bridgedb.xref(id, syscode))
    }
  }
}

You can see that I am using the BridgeDb functionality already, to create Xref objects. The code skips the first line (or any line with "column headers"). The BridgeDb Xref object's equals() method ensures I only have unique cross references in the resulting set.

Reading the other identifier set is a bit trickier. First, I manually changed the second column to use the BridgeDb system codes. The list is short, and this saves me from making mappings in the source code. One thing I decided to do in the source code is normalize the ChEBI identifiers (something that many of you will recognize):

file2 = new File(
  bioclipse.fullPath("/Compare Identifiers/set.csv")
)
set2 = new java.util.HashSet();
file2.eachLine { line ->
  fields = line.split(/\t/)
  if (fields.size() >= 3) { // skip empty or malformed lines
    def syscode = fields[1]
    def id = fields[2]
    if (syscode != "syscode") { // ok, not the header line
      if (syscode == "Ce" && !id.startsWith("CHEBI:")) {
        id = "CHEBI:" + id // normalize ChEBI identifiers
      }
      set2.add(bridgedb.xref(id, syscode))
    }
  }
}

Then, the naive approach that does not take into account identifier equivalence makes it easy to list the number of identifiers in both sets:

intersection = new java.util.HashSet();
intersection.addAll(set1);
intersection.retainAll(set2)

println "set1: " + set1.size()
println "set2: " + set2.size()
println "intersection: " + intersection.size()

This reports:

set1: 2584
set2: 6
intersection: 3

With the following identifiers in common:

[Ce:CHEBI:30089, Ce:CHEBI:15904, Ca:25513-46-6]

Of course, we want to use the identifier mapping itself. So, we first compare identifiers directly, and if they do not match, use BridgeDb and a metabolite identifier mapping database (get one here):

mbMapper = bridgedb.loadRelationalDatabase(
  bioclipse.fullPath(
    "/VOC/hmdb_chebi_wikidata_metabolites.bridge"
  )
)

intersection = new java.util.HashSet();
for (id2 in set2) {
  if (set1.contains(id2)) {
    // OK, direct match
    intersection.add(id2)
  } else {
    mappings = bridgedb.map(mbMapper, id2)
    for (mapped in mappings) {
      if (set1.contains(mapped)) {
        // OK, match via a mapped identifier
        intersection.add(id2)
      }
    }
  }
}

This gives five matches:

[Ch:HMDB00042, Cs:5775, Ce:CHEBI:15904, Ca:25513-46-6, Ce:CHEBI:30089]

The only metabolite it did not find in any pathway is the KEGG-identified metabolite, homocystine. I just added this compound to Wikidata. That means that the next metabolite mapping database release will recognize this compound too.

The R and JavaScript implementations
I will soon write up the R version in a follow-up post (but I have to finish grading student reports first).

Sci-Hub succeeds where publishers fail (open and closed)

Sci-Hub use in The Netherlands is not limited to the academic research cities. Harlingen is a small harbor town where at best a doctor lives, and one or two students who visit their parents in the weekend. The nature of the top downloaded paper suggests it is not a doctor :) Data from Bohannon and Elbakyan.
Knowledge dissemination is a thing. It's not easy. In fact, it's a major challenge. Traditional routes are not efficient anymore, though they were 200 years ago. The world has moved on; the publishing industry has not. I have written plenty in this blog about how the publishers could catch up, and while this is happening, progress is (too) slow.

The changes are not only technical, but also social. Several publishers still believe we live in an industrial era, while the world has moved on into a knowledge era. More people are mining and servicing data than are making physical things (think about that!). Access to knowledge matters, and dealing with data and knowledge stopped being something specific to academic and other research institutes many, many years ago. Arguments that knowledge is only for the highly educated simply contradict and bluntly ignore our modern civilization.

This makes access to knowledge a mix of technological and social evolution, and on both ends many publishers fail, fail hard, fail repeatedly. I would even argue that while the new publishers are improving things, they are failing to really innovate in knowledge dissemination. And not just the publishing industry, also many scientists. Preprint servers are helpful, but they are really not the end goal. If you really care about speeding up knowledge dissemination, stop worrying about things like text mining and preprints: you have to start making knowledge machine readable (sorry, scientists) and release that along with, or before, your article. Yes, that is harder, but just realize you are getting well paid for doing your job.

So, the success of Sci-Hub is by no means unexpected. It is not really the end goal I have in mind, and in many ways it contradicts what I want. But the research community clearly thinks differently. Oh wait, not just the research community, but the current civilization: the results of Bohannon's analysis of the Sci-Hub access logs I just linked to clearly show this. There are so many aspects, and so many interpretations and remaining questions. The article rightfully asks: is it need or convenience? I recently argued the latter is likely an important reason at western universities, and that it is nothing new.

This article is a must read if you care about the future of civilization. Bonus points for a citable data set!

Bohannon, J. Who's downloading pirated papers? everyone. Science 352, 508-512 (2016). URL http://dx.doi.org/10.1126/science.352.6285.508.
Elbakyan, A. & Bohannon, J. Data from: Who's downloading pirated papers? everyone. (2016). URL http://dx.doi.org/10.5061/dryad.q447c.

Programming in the Life Sciences #22: jsFiddle

My son pointed me to jsFiddle, which allows you to edit JavaScript snippets and run them. I had heard of it before, but never really found time for it. But I'm genuinely impressed with the stuff he is doing, and finally wanted to try sharing JavaScript snippets online, particularly because I had to update the course description of Programming in the Life Sciences. In this course the students work with JavaScript, and there are a number of examples, but those contain a lot of HTML boilerplate code.

So, here's the first of those examples, stripped of most of the things you don't need, and with some extra documentation as comments: