Welcome to the govinfo Developer Hub. As part of GPO's continuing mission to Keep America Informed, we are making it easier for developers and the public at large to access and work with the information available on govinfo.
GPO on GitHub
GPO uses GitHub to provide documentation to the developer community about our content, metadata, and the processes used to generate them. Additionally, GPO's GitHub repositories help users retrieve govinfo content and metadata programmatically or in bulk, including supporting resources and user guides. Data users can also submit and track feedback on GPO's GitHub repositories.
- api - services to access govinfo content and metadata
- bill-status - sample Bill Status XML files and user guide
- bulk-data - user guides for XML on the govinfo Bulk Data Repository
- collections - information about FDsys metadata, including regular expressions
- link-service - create links to content and metadata
- rss - notifications for new govinfo content and metadata
- sitemap - sitemaps to crawl for content and metadata
- uslm - United States Legislative Markup (USLM) XML Schema
Our API is intended to provide data users with a simple means to programmatically access govinfo content and metadata, which is stored in self-describing packages. This initial release provides functionality to retrieve lists of packages added or modified within a given time frame, summary metadata for packages, direct access to content and metadata formats, and equivalent granule information. We are continually adding new features; submit ideas for enhancements here.
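As a sketch of how a developer might call the API from Python using only the standard library: the endpoint shape (`/collections/{collection}/{startDate}`) and parameter names (`pageSize`, `offset`, `api_key`) follow the published documentation at api.govinfo.gov, but treat them as assumptions and verify them there before relying on this.

```python
# Minimal sketch of listing packages recently added to a govinfo collection.
# Endpoint path and parameter names are assumptions drawn from the public
# API docs at https://api.govinfo.gov -- verify them there before use.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://api.govinfo.gov"

def collection_url(collection, start_date, api_key, page_size=10):
    """Build the URL that lists packages added or modified since start_date
    (ISO 8601, e.g. '2024-01-01T00:00:00Z') in a collection such as 'BILLS'."""
    query = urlencode({"offset": 0, "pageSize": page_size, "api_key": api_key})
    return f"{API_BASE}/collections/{collection}/{start_date}?{query}"

def fetch_packages(collection, start_date, api_key):
    """Fetch and decode the package list (requires a key from api.data.gov)."""
    with urlopen(collection_url(collection, start_date, api_key)) as resp:
        return json.load(resp)["packages"]
```

With a valid api.data.gov key, `fetch_packages("BILLS", "2024-01-01T00:00:00Z", key)` would return summary entries for recently published bill packages.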
Bulk Data Repository
GPO provides the capability to download XML in bulk for select collections from our Bulk Data Repository. This allows developers to more easily retrieve large sets of rich, structured XML data for their own mashups.
GPO is a member of the Legislative Branch Bulk Data Task Force, which was mandated in a committee report accompanying the House Legislative Branch Appropriations Bill for FY2013. Several legislators released a statement on June 6, 2012, stating their goal is to “provide bulk access to legislative information to the American people without further delay.”
User guides for select XML bulk data sets can be found on GPO's GitHub account.
GPO currently offers the following XML content for bulk download:
- Congressional Bill Text – House Bills beginning in 2013, Senate Bills added in January 2015
- Congressional Bill Status – 113th Congress to Present
- Congressional Bill Summaries – House Bill Summaries added in 2014, summaries for Senate Bills added in January 2015
- Code of Federal Regulations (Annual Edition) – 1996 to Present
- Electronic Code of Federal Regulations (current XML file for each of the titles in the eCFR)
- Federal Register – 2000 to Present
- United States Government Manual – 2011 to Present
- Public Papers of the Presidents of the United States – 2009, 2010, 2011
- Privacy Act Issuances – 2011, 2013, 2015
- House Rules and Manual – 114th Congress (2015-2016)
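Bulk data files are served over plain HTTPS, so retrieving them needs nothing beyond a URL. The directory layout sketched below (collection / congress / session / bill type) is an assumption based on browsing the repository; confirm the actual paths at govinfo.gov/bulkdata.

```python
# Hypothetical helper for locating bill-text XML on the Bulk Data Repository.
# The collection/congress/session/bill-type layout is an assumption --
# browse https://www.govinfo.gov/bulkdata to confirm the real paths.
BULK_BASE = "https://www.govinfo.gov/bulkdata"

def bills_listing_url(congress, session, bill_type):
    """Directory listing of bill-text XML, e.g. 118th Congress, 1st session,
    House bills ('hr')."""
    return f"{BULK_BASE}/BILLS/{congress}/{session}/{bill_type}"
```

A bulk consumer would fetch such a listing, enumerate the XML files it names, and download each one with an ordinary HTTP client.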
The govinfo link service enables users and developers to create query- and parameter-based links to govinfo content and metadata. It offers all of the functionality of the FDsys link service, plus a few additional link types and an easier-to-use set of documentation. The link service has been built and documented using the OpenAPI Specification and Swagger UI.
The parameters below are also available for the govinfo link service. High-level changes are:
- the link-type value "contentdetail" is now "details"
- the addition of the "related" and "context" link-type values, which provide access to the equivalent tabs on a govinfo details page, where available
The link service is used to create embedded links to content and metadata and is currently enabled for the collections below. More information about each query is provided in the documentation.
- Code of Federal Regulations (CFR)
- Compilation of Presidential Documents (CPD)
- Congressional Bills (BILLS)
- Congressional Calendars (CCAL)
- Congressional Committee Prints (CPRT)
- Congressional Documents (CDOC)
- Congressional Hearings (CHRG)
- Congressional Record - Daily (CREC)
- Congressional Reports (CRPT)
- Federal Register (FR)
- Public and Private Laws (PLAW)
- Statutes at Large (STATUTE)
- United States Code (USCODE)
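As an illustration of the query-based style, here is a small Python helper that assembles a Congressional Bills link. The path segments and parameter names (`billversion`, `link-type`) are assumptions modeled on the link service's Swagger documentation; check them there before use.

```python
# Hypothetical link-service URL builder; the path segments and parameter
# names should be verified against the link service's Swagger documentation.
from urllib.parse import urlencode

LINK_BASE = "https://www.govinfo.gov/link"

def bill_link(congress, bill_type, bill_num, link_type="pdf"):
    """Link to the most recent version of a bill, e.g. H.R. 3076 (117th)."""
    query = urlencode({"billversion": "mostrecent", "link-type": link_type})
    return f"{LINK_BASE}/bills/{congress}/{bill_type}/{bill_num}?{query}"
```

Because these links encode a query rather than a fixed file location, they keep resolving to the latest matching content even as new versions are published.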
We also provide users with easy access to notifications when new content is made available through RSS feeds. We currently have feeds available for each collection, as well as separate feeds for bulk data collections and individual feeds for each court.
Our RSS feeds currently provide notice for new as well as updated versions of content. From time to time, when we need to do reprocessing of large volumes of content, we may suspend the notifications of updates for an individual collection. This is done to prevent the individual RSS feeds from having a large number of old packages reappearing for the majority of users. Users interested in seeing when any package in a given collection is updated should also consider leveraging our sitemaps functionality.
RSS feeds are commonly used on blogs (weblogs), news websites, and other places with frequently updated content. RSS is an easy way to keep up with news and information that's important to you. By subscribing to an RSS feed, you can have content delivered directly to you without receiving an email.
RSS feeds (which have the extension ".xml", ".rss", ".sfm", ".cfm", ".rdf", ".aspx", or ".php") require installation and use of RSS aggregator software. An RSS aggregator allows you to subscribe to an RSS feed. There are many aggregators available; some are free and some are available for sale.
An RSS aggregator gathers material from websites that you tell it to scan, and it brings new information from those sites to you. It's a convenient format because it allows you to view all the new content from multiple sources in one location on your desktop or mobile device.
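Developers who would rather poll a feed than use a desktop aggregator can do so directly: RSS 2.0 is ordinary XML and parses with the Python standard library. The parsing below is generic; the per-collection feed URLs themselves should be taken from the govinfo RSS page.

```python
# Parse an RSS 2.0 feed into (title, link) pairs using only the stdlib.
import xml.etree.ElementTree as ET

def feed_items(xml_text):
    """Return (title, link) for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

# To poll a live feed (the exact feed URL is an assumption -- take it from
# the govinfo RSS page):
#   from urllib.request import urlopen
#   with urlopen(feed_url) as resp:
#       new_items = feed_items(resp.read())
```

A simple notifier could run this on a schedule and act only on links it has not seen before.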
We also have sitemaps available to help developers crawl the entirety of the public dataset. These sitemaps provide a hierarchical list of the full content available from the site, updated automatically as new content is added. This allows crawlers to efficiently determine whether new content is available and whether additional crawling is needed.
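Sitemaps follow the standard sitemaps.org 0.9 schema, so a crawler can walk them with any XML parser. A minimal sketch (the sitemap URLs themselves come from the site; only the schema namespace below is standard):

```python
# Extract (loc, lastmod) pairs from a sitemap or sitemap-index document,
# per the sitemaps.org 0.9 schema.
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_entries(xml_text):
    """Return (loc, lastmod) for each <url> or <sitemap> child entry."""
    root = ET.fromstring(xml_text)
    return [(node.findtext("sm:loc", namespaces=SITEMAP_NS),
             node.findtext("sm:lastmod", namespaces=SITEMAP_NS))
            for node in root]
```

A crawler would fetch the top-level sitemap index first, then compare each `lastmod` value against its last crawl time to decide which child sitemaps need re-fetching.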