# eurlexpkg.Rmd
This vignette shows how to use the `eurlex` R package to retrieve data on European Union law.
Dozens of political scientists and legal scholars use data on European Union laws in their research. The provenance of these data is rarely discussed. More often than not, researchers resort to the quick and dirty technique of scraping entire HTML pages from eur-lex.europa.eu. This is neither the optimal nor the preferred (from the perspective of the server host) way of retrieving data, especially as the Publications Office of the European Union, the public body behind Eur-Lex, operates several dedicated APIs for automated retrieval of its data.
The allure of web scraping is completely understandable. Not only is it easier to download data that can be readily viewed in a browser; using the dedicated APIs also requires technical knowledge of semantic web and Client URL technologies, which is not necessarily widespread among researchers. And why go through the pain of learning how to compose SPARQL queries when it is much easier to simply download the web page?
The `eurlex` R package attempts to significantly reduce the overhead associated with using the SPARQL and REST APIs made available by the EU Publications Office. Although at present it does not offer access to the same array of information as comprehensive web scraping might, the package provides simpler, more efficient and transparent access to data on European Union law. This vignette gives a quick guide to the package and an even quicker introduction to the Eur-Lex dataverse.
## `eurlex` package
The `eurlex` package currently envisions the typical use case as getting bulk information about EU law and policy into R as fast as possible. The package contains three core functions to achieve that objective: `elx_make_query()` to create SPARQL queries based on user input; `elx_run_query()` to execute the pre-made or any other manually input query; and `elx_fetch_data()` to fire GET requests for certain metadata to the REST API.
The package also contains largely self-explanatory functions for retrieving data on EU court cases (`elx_curia_list()`) and Council votes (`elx_council_votes()`, currently dysfunctional) from outside Eur-Lex. More advanced users might be interested in downloading and custom-parsing XML notices with `elx_download_xml()`.
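For illustration, a single notice can be saved locally and parsed with the xml2 package. This is only a minimal sketch: the CELEX-based URL and the `file` argument are assumptions on my part, so consult `?elx_download_xml` for the exact signature and accepted identifiers.

```r
# a minimal sketch: download the raw XML notice of one document and parse it;
# the CELEX-based URL and the `file` argument are assumptions - see
# ?elx_download_xml for the exact signature
library(xml2)

elx_download_xml(url = "http://publications.europa.eu/resource/celex/32013L0011",
                 file = "notice_32013L0011.xml")

notice <- read_xml("notice_32013L0011.xml")
xml_name(notice)
```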
## `elx_make_query()`: Generate SPARQL queries
The function `elx_make_query()` takes as its first argument the type of resource to be retrieved from Cellar, the semantic database that powers Eur-Lex (and other publications).
```r
library(eurlex)
library(dplyr)

query_dir <- elx_make_query(resource_type = "directive")
```
Currently, it is possible to choose from among a host of resource types, including directives, regulations and even case law (see the function description for the full list). It is also possible to manually specify a resource type from the eligible list.[^1]
The choice of resource type is then reflected in the SPARQL query generated by the function:
```r
query_dir %>%
  cat()

elx_make_query(resource_type = "caselaw") %>%
  cat()

elx_make_query(resource_type = "manual", manual_type = "SWD") %>%
  cat()
```
There are various ways of querying the same information in the Cellar database due to the existence of several overlapping classes and identifiers describing the same resources. The queries generated by the function should offer a reliable way of obtaining exhaustive results, as they have been validated by the helpdesk of the Publications Office. At the same time, it is always possible that there will be issues either on the query or the database side; please report any you encounter on GitHub.
The other arguments in `elx_make_query()` relate to additional metadata to be returned. The results include by default the CELEX number and exclude corrigenda (corrections of errors in legislation). Other data need to be opted into. Make sure to select ones that are logically compatible (e.g. case law does not have a legal basis). More options should be added in the future.
Note that the availability of data for each variable might have an impact on the results. The data frame returned by the query might shrink to the size of the variable with the most missing data. It is recommended to always compare the results of a desired query to those of a minimal query requesting only CELEX ids.
elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
cat()
# minimal query: elx_make_query(resource_type = "directive")
elx_make_query(resource_type = "recommendation", include_date = TRUE, include_lbs = TRUE) %>%
cat()
# minimal query: elx_make_query(resource_type = "recommendation")
You can also decide not to specify any resource type, in which case all types of documents will be returned. As there are over a million documents with a CELEX identifier, this is likely not efficient for the majority of users. But since version 0.3.5 it is possible to request documents belonging to a particular "sector" or directory code.
```r
# request documents from directory 18 ("Common Foreign and Security Policy")
# and sector 3 ("Legal acts")
elx_make_query(resource_type = "any",
               directory = "18",
               sector = 3) %>%
  cat()
```
Now that we have a query, we are ready to run it.
## `elx_run_query()`: Execute SPARQL queries
`elx_run_query()` sends SPARQL queries to a pre-specified endpoint. The function takes the query string as its main argument, which means you can manually pass it any working SPARQL query (relevant to official EU publications).
```r
results <- elx_run_query(query = query_dir)

# the functions are compatible with piping
#
# elx_make_query("directive") %>%
#   elx_run_query()

as_tibble(results)
```
The function outputs a `data.frame` where each column corresponds to one of the requested variables, while the rows accumulate observations of the resource type satisfying the query criteria. Obviously, the more data is to be returned, the longer the execution time, which can vary from a few seconds to several minutes depending also on your connection.
The first column always contains the unique URI of a "work" (legislative act or court judgment) which identifies each resource in Cellar. Several human-readable identifiers are normally associated with each "work", but the most useful one is CELEX, retrieved by default.[^2]
One column you should always pay attention to is `type` (as in `resource_type`). The URIs contained there reflect the FILTER argument in the SPARQL query, which is manually pre-specified. All resources are indexed as being of one type or another. For example, when retrieving directives, the results will also include delegated directives, which might not be desirable, depending on your needs. You can filter results by `type` to make the necessary adjustments, as in the sketch below. The queries are expansive by default, in the spirit of erring on the side of over-inclusiveness rather than vice versa.
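A minimal sketch of such a filter follows. The exact form of the type URIs is an assumption on my part (Cellar resource-type codes such as DIR), so inspect the column before filtering:

```r
# inspect which resource types actually came back
unique(results$type)

# keep only "plain" directives; the "/DIR" URI suffix is an assumption
# about Cellar's resource-type codes - verify it against the output above
results_dir_only <- results %>%
  filter(grepl("/DIR$", type))
```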
The data is returned in the long format, which means that rows are recycled up to the length of the variable with the most data points. For example, if 20 directives are returned, each with two legal bases, the resulting `data.frame` will have 40 rows. Some variables, such as dates, unexpectedly contain several entries for some documents. You should always check the number of unique identifiers in the results, as shown below, instead of assuming that each row is a unique observation.
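A quick sanity check along those lines:

```r
# compare the number of rows with the number of unique documents
nrow(results)
n_distinct(results$celex) # from dplyr, loaded above
```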
EuroVoc is a multilingual thesaurus whose keywords are used to describe the content of European Union documents. Most resource types that can be retrieved with the pre-defined queries in this package can be accompanied by EuroVoc keywords, and these can be retrieved like other variables.
```r
rec_eurovoc <- elx_make_query("recommendation", include_eurovoc = TRUE, limit = 10) %>%
  elx_run_query() # results truncated for the sake of the example

rec_eurovoc %>%
  select(celex, eurovoc)
```
By default, the endpoint returns the EuroVoc concept codes rather than the labels (keywords). The function `elx_label_eurovoc()` needs to be called to obtain a look-up table with the labels.
```r
eurovoc_lookup <- elx_label_eurovoc(uri_eurovoc = rec_eurovoc$eurovoc)

print(eurovoc_lookup)
```
The results include labels only for unique identifiers, but with `dplyr::left_join()` it is straightforward to append the labels to the entire dataset.
As elsewhere in the API, we can tap into the multilingual nature of EU documents when it comes to the EuroVoc keywords as well. Moreover, most concepts in the thesaurus are associated with alternative labels; these can be returned too (separated by a comma).
```r
eurovoc_lookup <- elx_label_eurovoc(uri_eurovoc = rec_eurovoc$eurovoc,
                                    alt_labels = TRUE,
                                    language = "sk")

rec_eurovoc %>%
  left_join(eurovoc_lookup) %>%
  select(celex, eurovoc, labels)
```
## `elx_fetch_data()`: Fire GET requests
A core contribution of the SPARQL requests is that we obtain a comprehensive list of identifiers that we can subsequently use to obtain more data relating to the documents in question. While the results of the SPARQL queries are useful also for web scraping (with the `rvest` package), the function `elx_fetch_data()` enables us to fire GET requests to retrieve data on documents with known identifiers (including the Cellar URI).
One of the most sought-after pieces of data in the Eur-Lex dataverse is the text of the documents. It is now possible to automate the pipeline for downloading html and plain texts from Eur-Lex. Similarly, you can retrieve the title of a document. For both you can also specify the desired language (English by default). Other metadata might be added in the future.
```r
# the function is not vectorized by default
# elx_fetch_data(url = results$work[1], type = "title")

# we can use purrr::map() to play that role
library(purrr)

# wrapping in possibly() catches errors in case there is a server issue
dir_titles <- results[1:5,] %>% # take only the first 5 directives to save time
  mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) %>%
  mutate(title = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
                         "title")) %>%
  as_tibble() %>%
  select(celex, title)

print(dir_titles)
```
Note that text requests are by far the most time-intensive; requesting the full text for thousands of documents is liable to extend the run-time into hours. Texts are retrieved from html by priority, but methods for .pdf and .doc files are also implemented.[^3] The function even handles multi-document resources (by pasting their texts together).
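As a minimal sketch, the title pipeline above can be adapted to full texts; the `"text"` type mirrors the `"title"` example, but check `?elx_fetch_data` for the exact arguments and language options:

```r
# a minimal sketch of a text pipeline; type = "text" mirrors the "title"
# example above - see ?elx_fetch_data for the exact arguments
dir_texts <- results[1:2,] %>% # only 2 documents: text requests are slow
  mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) %>%
  mutate(text = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
                        "text")) %>%
  as_tibble() %>%
  select(celex, text)
```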
## Application

In this section I showcase a simple application of `eurlex` to making overviews of EU legislation. First, we collate data on directives.
```r
dirs <- elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
  elx_run_query()
```
Let's calculate the proportion of directives currently in force out of all directives ever adopted, as sketched below. This variable offers a particularly good demonstration of the usefulness of the package for retrieving EU law data, because it changes every day, as new acts enter into force and old ones drop out. Regularly scraping web pages at this scale is simply impractical and disproportionate.
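A minimal sketch of that calculation, assuming `force` is returned as "true"/"false" strings (the coding relied on in the filters below):

```r
# proportion of directives currently in force; the "true"/"false" coding
# of `force` is an assumption borne out by the filters used below
dirs %>%
  filter(!is.na(force)) %>%
  summarise(share_in_force = mean(force == "true"))
```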
Directives naturally become outdated with time. It might be all the more interesting to see which older acts are still surviving.
```r
library(ggplot2) # needed for the plot below

dirs %>%
  filter(!is.na(force)) %>%
  mutate(date = as.Date(date)) %>%
  ggplot(aes(x = date, y = celex)) +
  geom_point(aes(color = force), alpha = 0.1) +
  theme(axis.text.y = element_blank(),
        axis.line.y = element_blank(),
        axis.ticks.y = element_blank())
```
We want to know a bit more about some directives from the early 1970s that are still in force today. Their titles could give us a clue.
```r
dirs_1970_title <- dirs %>%
  filter(between(as.Date(date), as.Date("1970-01-01"), as.Date("1973-01-01")),
         force == "true") %>%
  mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) %>%
  mutate(title = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
                         "title")) %>%
  as_tibble()

print(dirs_1970_title)
```
I will use the `tidytext` package to get a quick idea of what the legislation is about.
```r
library(tidytext)
library(wordcloud)

# wordcloud of title words weighted by tf-idf
dirs_1970_title %>%
  select(celex, title) %>%
  unnest_tokens(word, title) %>%
  count(celex, word, sort = TRUE) %>%
  filter(!grepl("\\d", word)) %>%
  bind_tf_idf(word, celex, n) %>%
  with(wordcloud(word, tf_idf, max.words = 40))
```
I use term frequency-inverse document frequency (tf-idf) to weight the importance of the words in the wordcloud. If we used raw frequencies, the wordcloud would largely consist of words conveying little meaning ("the", "and", ...).
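For reference, the weight computed by `bind_tf_idf()` for a word $w$ in document $d$ is the standard

$$\mathrm{tf\text{-}idf}(w, d) = \mathrm{tf}(w, d) \times \ln \frac{N}{n_w},$$

where $\mathrm{tf}(w, d)$ is the relative frequency of $w$ in $d$, $N$ is the number of documents (here, directive titles) and $n_w$ the number of documents containing $w$.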
This is an extremely basic application of the `eurlex` package. Much more sophisticated methods can be used to analyse both the content and metadata of European Union legislation. If the package is useful for your research, please cite the accompanying paper.[^4]
[^1]: Note, however, that not all resource types will work properly with the pre-specified query.

[^2]: Occasionally, you may encounter legal acts without CELEX numbers, especially when digging through older legislation. It is good to report these to the Eur-Lex helpdesk.

[^3]: It is worth pointing out that the html and pdf contents of older case law differ. Whereas typically the html file is only going to contain a summary and the grounds of a judgment, the pdf should also contain the background to the dispute.

[^4]: Michal Ovádek (2021) Facilitating access to data on European Union laws, Political Research Exchange, 3:1, DOI: 10.1080/2474736X.2020.1870150