Preserving Public Government Information: The End of Term Web Archive

A presentation at the Fall 2011 Federal Depository Library Conference unveiling the End of Term Web Archive. The archive holds over 3,000 U.S. Government websites harvested in 2008-2009. http://eotarchive.cdlib.org


Transcript

  • 1. Preserving Public Government Information: The End of Term Web Archive. Abbie Grotke, Library of Congress; Tracy Seneca, California Digital Library. END OF TERM ARCHIVE – FDLC, Oct. 17, 2011
  • 2. Outline: Background; Nomination of URLs; Data transfer; Preparing for Access; Demo of public interface; 2012: what you can do; Our questions for you / your questions for us!
  • 3. Themes: Ad-hoc nature of this project; wellspring of web archiving tools; testbed for emerging tools.
  • 4. Collaborating Institutions: Library of Congress; Internet Archive; California Digital Library; University of North Texas; US Government Printing Office.
  • 5. Why Archive .gov? Why Collaborate?  Fit with partner missions to collect and preserve at-risk (born-digital) government information  Potential for High Research Use/Interest in Archives  It Takes a Village  Experienced Partners
  • 6. Project Goals: Work collaboratively to preserve public U.S. Government Web sites at the end of the presidential administration ending January 19, 2009; document federal agencies' presence on the Web during the transition of Presidential administrations; enhance the existing research collections of the five partner institutions.
  • 7. URL Nomination Tool: Facilitates collaboration; ingests seed lists from different sources; records known metadata (Branch, Title, Comment, Who nominated); creates seed lists for crawls.
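A minimal sketch of what a nomination record and a seed-list export could look like. The fields mirror the metadata listed on this slide, but the class and function names are illustrative, not the Nomination Tool's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Nomination:
    # Illustrative record; not the Nomination Tool's real schema.
    url: str
    branch: str          # Legislative, Executive, or Judicial
    title: str = ""
    comment: str = ""
    nominated_by: str = ""
    in_scope: bool = True

def export_seed_list(nominations, path):
    """Write the in-scope URLs, one per line, as a crawler seed list."""
    with open(path, "w") as f:
        for nom in nominations:
            if nom.in_scope:
                f.write(nom.url + "\n")

seeds = [Nomination("http://www.expectmore.gov/", "Executive",
                    title="ExpectMore.gov",
                    comment="Federal program assessments",
                    nominated_by="volunteer")]
export_seed_list(seeds, "eot-seeds.txt")
```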
  • 8. Volunteer Nominators Call for volunteers targeted:  Government information specialists  Librarians  Political and social science researchers  Academics  Web archivists 31 individuals signed up to help
  • 9. Nominator To-Dos Nominate the most critical URLs for capture as "in scope" Add new URLs not already included in the list Mark irrelevant or obsolete sites as "out of scope" Add minimal URL metadata such as site title, agency, etc.
  • 10. Nomination Tool
  • 11. In Scope vs. Out of Scope. In scope: Federal government Web sites (.gov, .mil, etc.) in the Legislative, Executive, or Judicial branches of government; of particular interest for prioritization were sites likely to change dramatically or disappear during the transition of government. Out of scope: local or state government Web sites, or any other site not part of the federal government domain. Not captured: intranets, deep web content.
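A toy scoping filter along these lines is sketched below. Note that state and local sites also live under .gov (ca.gov, for example), so any real scoping pass needs a curated exclusion list; the one here is a placeholder, and nothing in it reflects the project's actual rules.

```python
from urllib.parse import urlparse

FEDERAL_SUFFIXES = (".gov", ".mil")
# State/local sites also use .gov, so a real pass needs a curated
# exclusion list; this tiny set is only a placeholder.
STATE_LOCAL = {"ca.gov", "nyc.gov"}

def in_scope(url: str) -> bool:
    """Rough federal-scope test mirroring the slide's rules."""
    host = (urlparse(url).hostname or "").lower()
    if any(host == d or host.endswith("." + d) for d in STATE_LOCAL):
        return False
    return host.endswith(FEDERAL_SUFFIXES)

assert in_scope("http://www.nasa.gov/")
assert not in_scope("http://www.ca.gov/")  # state site: out of scope
```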
  • 12. Prioritized URLs ~500 URLs nominated by volunteers
  • 13. Selected Researcher/Curator Interests  Homeland Security  Department of Labor  Department of Treasury  Education/“No Child Left Behind”  Health Care Reform  Stem Cell Research  Bush Administration Budget Justifications  Federal Program Assessments (ExpectMore.gov)
  • 14. Nomination Tool Lessons Learned: No coordination of selection or “assignments”, so likely gaps in collection. Start with a blank slate rather than pre-populate the database? Admin tools/reporting features need more work. Engage more experts to help identify at-risk content.
  • 15. Further Lessons Learned. CDL: hard to respond to nominations by shaping crawler settings and focus – we defaulted to getting as much as we could. When the tool was used for Deepwater Horizon, it added an extra task for curators – we used Delicious instead. The metadata we did get (branch, description) was really valuable after the fact!
  • 16. Questions for you: What do YOU need from the nomination tool? Is it nomination or curation?
  • 17. Partner Harvesting Roles Internet Archive – Broad, comprehensive harvests Library of Congress – In-depth Legislative branch crawls University of North Texas – Sites/Agencies that meet current UNT interests, e.g. environmental policy, and collections, as well as several “deep web” sites California Digital Library – Multiple crawls of all seeds in EOT database; sites of interest to their curators Government Printing Office – Support and analysis of “official documents” found in collection
  • 18. Crawl Schedule. Two approaches:  Broad, comprehensive crawls  Prioritized, selective crawls. Key dates:  Election Day, November 4  Inauguration Day, January 20
  • 19. Data Transfer. Goal: Distribute 15.9 TB of collected content among partners. LC's central transfer server used:  “Pulled” and “pushed” data from and to partners via Internet2, May 2009 – mid-2010  Common transfer tools and specifications were key. More info: http://blogs.loc.gov/digitalpreservation/2011/07/the-end-of-term-was-only-the-beginning/
  • 20. Transfer Tools: Bagger
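Bagger is a GUI front end to the BagIt packaging specification; the Library of Congress also publishes bagit-python, which does the same job from code. A minimal sketch of bagging a crawl directory for transfer follows; the directory name and bag-info fields are invented.

```python
# pip install bagit  (the Library of Congress BagIt implementation)
import bagit

# Convert a directory of WARC files into a bag in place: the payload
# moves under data/, and checksum manifests plus bag-info.txt are written.
bag = bagit.make_bag(
    "eot2008-warcs",  # hypothetical directory of crawl output
    {"Source-Organization": "Library of Congress",
     "External-Description": "End of Term 2008 crawl data"},
    checksums=["md5"],
)

# On the receiving side, verify nothing was corrupted in transfer.
bagit.Bag("eot2008-warcs").validate()
```

Fixed checksums and a shared packaging format are what let five partners push and pull terabytes over Internet2 and still prove the bits arrived intact.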
  • 21. Preparing for Access. 1st Tuesday of each month, 12:00 pm: “Anything to report on public access?” “No, nothing to report on public access.”
  • 22. Internet Archive also had: a MODS record extraction tool; a full copy of the content from all EOT partners; a QA “Playback” tool (takes screen images of archived materials); an export of the Nomination Tool metadata from UNT.
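The QA “Playback” tool itself isn't described here, so the sketch below only illustrates the general idea using Selenium: replay each archived seed and save a screenshot for visual spot-checking. The replay URL prefix and seed URLs are assumptions, not the tool's actual behavior.

```python
# pip install selenium  -- an illustrative QA pass, not IA's actual tool
from selenium import webdriver

SEEDS = ["http://www.nasa.gov/", "http://www.epa.gov/"]
# Wayback-style replay prefix; the real EOT replay pattern may differ.
REPLAY = "https://web.archive.org/web/20090120000000/"

driver = webdriver.Firefox()
try:
    for url in SEEDS:
        driver.get(REPLAY + url)
        name = url.split("//", 1)[1].strip("/").replace("/", "_")
        driver.save_screenshot(f"qa_{name}.png")  # compare by eye later
finally:
    driver.quit()
```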
  • 23. CDL had:
  • 24. CDL and IA revise records to simpler Dublin Core
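The slides don't spell out the crosswalk, so this is a guessed-at sketch of flattening a MODS record into simple Dublin Core. The element mapping is a common convention, not CDL's or IA's exact one.

```python
import xml.etree.ElementTree as ET

MODS_NS = {"m": "http://www.loc.gov/mods/v3"}
DC_NS = "http://purl.org/dc/elements/1.1/"

# MODS XPath -> simple Dublin Core element (assumed mapping).
CROSSWALK = {
    "m:titleInfo/m:title": "title",
    "m:name/m:namePart": "creator",
    "m:originInfo/m:dateCaptured": "date",
    "m:location/m:url": "identifier",
}

def mods_to_dc(mods_xml: str) -> ET.Element:
    """Copy mapped MODS fields into a flat Dublin Core record."""
    mods = ET.fromstring(mods_xml)
    record = ET.Element("record")
    for xpath, dc_name in CROSSWALK.items():
        for node in mods.findall(xpath, MODS_NS):
            ET.SubElement(record, f"{{{DC_NS}}}{dc_name}").text = node.text
    return record
```

Simpler records lose MODS's granularity, but a flat Dublin Core set is much easier to index consistently across five partners' metadata.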
  • 25. Caveats As with any web archive, the crawler is good, but not always perfect! Full-text index of 16 TB of data  Some behaviors designed to help rank and navigate such a large body of content
  • 26. Demo of Beta Interface
  • 27. Access gateway lessons. You could easily use this to integrate:  materials from multiple web archives, no matter where  any digital content, whether harvested or scanned
  • 28. Forthcoming Internet Archive tools for visualizing web archived data (“explore data”)
  • 29. Tag cloud extracted from metadata
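One plausible way to build such a cloud is a stopword-filtered term count over record titles, scaled into font sizes. The sample titles and the 12pt-36pt scaling below are invented for illustration.

```python
from collections import Counter
import re

STOPWORDS = {"of", "the", "and", "for", "a", "in", "to", "on"}
titles = [  # invented sample metadata titles
    "Department of Labor", "Department of the Treasury",
    "Bureau of Labor Statistics", "Department of Homeland Security",
]

counts = Counter(
    w for t in titles
    for w in re.findall(r"[a-z]+", t.lower())
    if w not in STOPWORDS
)

# Scale raw counts into font sizes for the cloud.
max_n = max(counts.values())
for word, n in counts.most_common():
    print(f"{word}: {12 + 24 * n / max_n:.0f}pt")
```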
  • 30. More questions to you: How might you use this archive?  Rediscovering, providing continued access to ‘lost’ documents?  Researching, visualizing trends in government data?
  • 31. What you can do Provide feedback on the Beta site Help nominate URLs for 2012:  Any nominations welcome, any amount of time you can contribute  Need particular help with:  Judicial Branch websites  Important content or subdomains on very large websites (such as NASA.gov) that might be related to current Presidential policies  Government content on non-government domains (.com, .edu, etc.)
  • 32. Timeframe for 2012 Project. 2012 – January 2012: Project team will begin accepting early nominations of priority websites via the Nomination Tool. Summer 2012: Recruitment of curators/nominators to help identify additional websites for prioritized crawling. July/August 2012: Bookend (baseline) crawl of government web domains begins. Summer/Fall 2012: Partners will crawl various aspects of government domains at varying frequencies, depending on selection policies/interests; team will determine strategy for crawling prioritized websites. November 2012 – February 2013: Crawl of prioritized websites. 2013 – January 2013: Depending on the outcome of the election, focused crawls will be conducted as needed during this period. Spring or Summer 2013: Bookend crawl, plus additional crawl of prioritized websites as determined by team.
  • 33. Questions? eotproject@loc.gov Follow us on Twitter! @eotarchive