We're still at the Boulder Outlook Hotel through April 2014. We seek an alternative venue for May and beyond. Please see the New Venue Desirements below and keep them in mind as you move around Boulder.


Note that we'll start at 4:15 this month due to our speaker's schedule.

This month marks the 3rd anniversary of the BESSIG!

An Easy Bake Semantic Metadata Repository for Scientific Data

Mik Cox, Tyler Traver, Anne Wilson, and Doug Lindholm, Laboratory for Atmospheric and Space Physics (LASP); Don Elsborg, CU Faculty Affairs

 

This presentation will discuss the use of open source tools, and the tasks that remained, in creating a semantically enabled metadata repository.

The LASP Interactive Solar Irradiance Data Center, LISIRD, is a web site that serves the lab's solar irradiance and related data products to the public. LISIRD provides information about the data it offers as part of its web page content, embedded in static HTML. At the same time, other LASP web sites, such as sites pertaining to specific missions or to education and outreach, provide the same information. Keeping data set information updated and in sync across web sites is a problem, and the information is not interoperable with emerging search and discovery tools.

To address this and other issues, we created a semantically enabled metadata repository that holds information about our data. In conjunction, we prototyped a new implementation of LISIRD that renders page content dynamically, pulling metadata from the repository so that each page includes current, vetted metadata from a single, definitive source. Other web pages can similarly pull this information if they choose. Additionally, we can now offer new semantic browse and search capabilities, such as searching for data sets by type (currently spectral solar irradiance, total solar irradiance, and solar indices) or by a user-specified spectral range.
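As an illustration of the kind of search this enables, the sketch below filters hypothetical data set records by type and spectral coverage in plain Python. The record fields and data set names are invented for illustration; the actual repository answers such queries against an ontology and triple store rather than a list of dictionaries.

```python
# Hypothetical metadata records; field and data set names are invented.
datasets = [
    {"id": "sorce_ssi", "type": "spectral_solar_irradiance",
     "min_wavelength_nm": 115.0, "max_wavelength_nm": 2416.0},
    {"id": "tsi_composite", "type": "total_solar_irradiance"},
    {"id": "mgii_index", "type": "solar_index"},
]

def search(records, dataset_type=None, wavelength_nm=None):
    """Filter data set records by type and/or spectral coverage."""
    hits = []
    for r in records:
        if dataset_type and r["type"] != dataset_type:
            continue
        if wavelength_nm is not None:
            lo = r.get("min_wavelength_nm")
            hi = r.get("max_wavelength_nm")
            if lo is None or not (lo <= wavelength_nm <= hi):
                continue
        hits.append(r["id"])
    return hits

# Spectral data sets covering a user-specified wavelength of 500 nm:
print(search(datasets, dataset_type="spectral_solar_irradiance",
             wavelength_nm=500.0))
```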

We can also render the metadata in various formats understandable to other communities, such as SPASE for the heliophysics community and ISO for the international community. This will allow us to federate with sites that use those formats, allowing broader discovery of our data.

To date, metadata management at LASP has generally been done on a per-project, ad hoc basis. We are building applications on top of the repository that provide CRUD (create, read, update, delete) capabilities for metadata records to metadata 'owners' and 'curators'. We expect this to help data managers store and manage their metadata in a more rigorous fashion, should they choose to use it.

We heavily leveraged existing open source tools to create the repository. In this talk we'll describe using VIVO to create a semantic database, LaTiS to fetch data and metadata, and AngularJS to write dynamic, testable JavaScript. We'll also describe our experiences extending two existing ontologies to meet our space physics domain needs.

With these tools and some student time (though our students are exceptional) we are achieving significantly increased capabilities at a relatively low cost. We believe this tool combination could help projects with limited resources achieve similar capabilities to manage and provide access to metadata.

And, if that's not easy-bake enough for you, try this PC EZ-Bake Oven, made especially for geeks: http://www.thinkgeek.com/stuff/41/ezbake.shtml.  


Schedule  (mostly)

4:15 - 5:xx presentation

5:xx - 6:00 social

 

New Venue Desirements

Free, or cost based on attendance

Can purchase food and beverages, or within walking distance of such 

Easy to get to, easy to park, in Boulder

Separate room

Projection capability

Internet connectivity

hours 4:00 - 6:00 Tu or Wed, 2nd, 3rd, or 4th week of the month, flexible

 


Note that we're meeting on a Tuesday rather than a Wednesday this month due to room availability. We're back in the Chautauqua room at the Boulder Outlook Hotel.

Earth System CoG and the Earth System Grid Federation: A Partnership for Improved Data Management and Project Coordination

Sylvia Murphy, Cecelia DeLuca, Allyn Treshansky, NOAA/CIRES, Luca Cinquini, JPL/NOAA

The Earth System CoG Collaboration Environment, led by a NOAA ESRL/CIRES team, is partnering with the DOE-led Earth System Grid Federation (ESGF) data archive to deliver a capability that will enable users to store, federate, and search scientific datasets, and manage and connect the projects that produced those datasets.  

ESGF is an international network of data nodes that is used to host climate data sets, including the model outputs from the Coupled Model Intercomparison Project (CMIP), which supported the Intergovernmental Panel on Climate Change (IPCC) assessment reports.  ESGF data nodes are federated, so that all data holdings are visible from any of the installation sites.  An ESGF data node is now installed at NOAA's Earth System Research Laboratory (ESRL).  It currently hosts data from the Dynamical Core Model Intercomparison Project (DCMIP) and Twentieth Century Reanalysis data from ESRL's Physical Sciences Division.
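The federated holdings can also be reached programmatically through ESGF's REST search API. The sketch below only builds a query URL (no network call is made); the `esg-search/search` path and facet names follow the commonly documented ESGF search API, but treat the exact endpoint and parameters here as assumptions rather than a definitive reference.

```python
from urllib.parse import urlencode

def esgf_search_url(base, **facets):
    """Build an ESGF-style faceted search URL (sketch; no request is sent)."""
    params = {"format": "application/solr+json", "limit": 10}
    params.update(facets)  # facet names, e.g. project=..., variable=...
    return base + "/esg-search/search?" + urlencode(params)

# Hypothetical query against the ESRL node mentioned above:
url = esgf_search_url("http://hydra.fsl.noaa.gov",
                      project="DCMIP", variable="ta")
print(url)
```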

CoG is a collaboration environment and connective hub for networks of projects in the Earth sciences.  It hosts software development projects, model intercomparison projects, and short university-level courses. It includes a configurable search of data on any ESGF node, metadata collection and display, project-level wikis, and a host of other capabilities. There are 74 projects currently using the system.

CoG is partnering with the international Earth System Model Documentation (ES-DOC) project, funded by both NOAA and the EU’s Infrastructure for the European Network for Earth System Modeling (IS-ENES) project. ES-DOC is developing tools that capture, display, and compare Earth system model metadata. This information can be linked directly from a CoG project or attached to specific datasets in the ESGF node.

This presentation will provide an overview of both CoG and ESGF, demonstrate data discovery and download, and showcase key CoG capabilities using relevant example projects.

CoG: https://earthsystemcog.org/ 

ESRL ESGF data node: http://hydra.fsl.noaa.gov/esgf-web-fe/ 


Schedule  (mostly)

4:00 - 5:xx presentation

5:xx - 6:00 social

 



 


Note that this meeting will be held in the Panorama Room of the Outlook Hotel instead of our usual Chautauqua room.  This means that we won't have a server, and food and drinks must be ordered in the restaurant.

Accessing Data Instead of Ordering Data: A New Normal

Michael Little, Advanced Development Systems Engineer, Atmospheric Science Data Center (ASDC)

Mike will describe how the new generation of research objectives will need to avoid staging data locally from multiple modeling and observational repositories.  Rather, new access methods will present a machine-to-machine interface that permits codes and software applications to retrieve small increments of data continuously as part of the processing.  The ASDC's Data Access architecture will be described, with a particular emphasis on iRODS as one of the most promising tools for remote access to data held in Earth science data centers.

Mike's slides for this talk are available here:  DataDistributionArchitecture_0.4.3.pptx.

Schedule (mostly)

4:00 - 5:xx presentation

5:xx - 6:00 social



Deep Carbon Observatory - Data Science and Data Management Infrastructure Overview and Demonstration

Patrick West, Rensselaer Polytechnic Institute

The Deep Carbon Observatory (DCO) brings together hundreds of organizations and individuals from all around the world, spanning a great many scientific domains with a focus on carbon. The DCO Data Science team is anticipating the generation of terabytes of information in the form of documents, scientific datasets ranging from level 0 to data products and visualizations, information about events, people, and organizations, and more. So how do we keep track of all of this information, manage it, and disseminate it?

In order to organize all of this information and provide the research community the tools necessary to collaborate and do their research, the DCO Data Science team is putting together a suite of tools that will integrate all of these components in a seamless, distributed, heterogeneous environment. This presentation and demonstration will provide an overview of the work that we, the DCO Data Science team, are doing to provide such an environment.

Due to Patrick's schedule, we'll plan on starting at 4:15 instead of 4:00.

Here are Patrick's slides:  http://tw.rpi.edu/web/doc/DCO-DS-Overview-Demonstration-BESSIG.

 

Schedule (mostly)

4:15 - 5:xx presentation

5:xx - 6:00 social


Our meeting this month is a special event for several reasons.   Copies of Andrew's book will be available to the first 50 attendees, and the HDF Group will be providing refreshments for us.  Also, this may be our last meeting at the Boulder Outlook Hotel, as the hotel has been sold.  So, please join us in the Crown Rock room (not our usual room) at the Outlook for:

Improving Science with Open Formats and High-Level Languages: Python and HDF5

Andrew Collette, Laboratory for Atmospheric and Space Physics (LASP)

This talk explores how researchers can use the scalable, self-describing HDF5 data format together with the Python programming language to improve the analysis pipeline, easily archive and share large datasets, and improve confidence in scientific results.  The discussion will focus on real-world applications of HDF5 in experimental physics at two multimillion-dollar research facilities: the Large Plasma Device at UCLA, and the NASA-funded hypervelocity dust accelerator at CU Boulder.  This event coincides with the launch of a new O’Reilly book, Python and HDF5: Unlocking Scientific Data, complimentary copies of which will be available for attendees.

As scientific datasets grow from gigabytes to terabytes and beyond, the use of standard formats for data storage and communication becomes critical.  HDF5, the most recent version of the Hierarchical Data Format originally developed at the National Center for Supercomputing Applications (NCSA), has rapidly emerged as the mechanism of choice for storing and sharing large datasets.  At the same time, many researchers who routinely deal with large numerical datasets have been drawn to Python by its ease of use and rapid development capabilities.

Over the past several years, Python has emerged as a credible alternative to scientific analysis environments like IDL or MATLAB.  In addition to stable core packages for handling numerical arrays, analysis, and plotting, the Python ecosystem provides a huge selection of more specialized software, reducing the amount of work necessary to write scientific code while also increasing the quality of results.  Python’s excellent support for standard data formats allows scientists to interact seamlessly with colleagues using other platforms.
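A minimal taste of the self-describing workflow the talk explores, using the h5py package (the library Andrew's book covers): the data and its units and description live together in one file, so any HDF5-aware tool can rediscover the structure later. The file, group, and attribute names here are invented for illustration.

```python
import h5py
import numpy as np

# Write a self-describing HDF5 file: data plus metadata in one place.
with h5py.File("shot_0042.h5", "w") as f:
    dset = f.create_group("langmuir_probe").create_dataset(
        "density", data=np.linspace(1e18, 5e18, 100), compression="gzip")
    dset.attrs["units"] = "m^-3"
    dset.attrs["description"] = "electron density vs. time"

# Read it back by name, knowing nothing else about the file's contents:
with h5py.File("shot_0042.h5", "r") as f:
    dset = f["langmuir_probe/density"]
    print(dset.shape, dset.attrs["units"])
```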

Schedule (more or less)

4:00 - 5:00 presentation
5:00 - 6:00 social

Regridding of data is a common problem faced by many scientific software developers.   If regridding is part of your world, this talk may be of interest to you.  Come join us at the Boulder Outlook Hotel for this month's talk:

There is more to conservative interpolation: interpolating edge- and face-centered fields in the geosciences

Alexander Pletzer, Tech-X

Interpolation is one of the most widely used postprocessing tasks, according to a survey of Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) users. Most geo-postprocessing tools (UV-CDAT, NCL, Ferret, etc.) support a choice of both bilinear and conservative regridding, with conservative interpolation guaranteeing that the total amount of "stuff" (energy, water, etc.) remains unchanged after regridding. SCRIP and ESMF are examples of libraries implementing these interpolation methods.

We argue that the type of interpolation is dictated by the type of field: cell-centered fields require conservative interpolation, whereas nodal fields require bilinear (or higher order) interpolation. Moreover, the wind velocity fields used by finite-volume atmospheric codes, which are neither cell-centered nor nodal but face-centered (Arakawa D staggering), require different interpolation formulas. Interpolation formulas for face-centered and edge-centered (Arakawa C) fields have been known as Whitney forms since 1957 and are widely used in electromagnetics. We present interpolation methods new to the geosciences that conserve flux and line integrals for Arakawa D and Arakawa C staggered fields, respectively.
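To make the conservative idea concrete, here is a one-dimensional sketch: cell-averaged values on a fine grid are mapped to a coarse grid with overlap weights, so the total amount of "stuff" is preserved exactly. This is only the cell-centered 1D analogue of what SCRIP/ESMF do with cell-intersection areas in 2D, not the face- or edge-staggered methods the talk introduces.

```python
import numpy as np

def conservative_regrid(src_edges, src_vals, dst_edges):
    """Map cell averages from one 1D grid to another via overlap weights."""
    dst_vals = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_edges) - 1):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        total = 0.0
        for i in range(len(src_edges) - 1):
            overlap = min(hi, src_edges[i + 1]) - max(lo, src_edges[i])
            if overlap > 0:
                total += src_vals[i] * overlap  # integral over the overlap
        dst_vals[j] = total / (hi - lo)         # back to a cell average
    return dst_vals

src_edges = np.linspace(0.0, 1.0, 11)   # 10 fine cells
src_vals = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0])
dst_edges = np.linspace(0.0, 1.0, 4)    # 3 coarse cells
dst_vals = conservative_regrid(src_edges, src_vals, dst_edges)

# The total amount of "stuff" (sum of value * cell width) is unchanged:
src_total = np.sum(src_vals * np.diff(src_edges))
dst_total = np.sum(dst_vals * np.diff(dst_edges))
print(src_total, dst_total)
```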

This talk should be of interest to anybody who needs to regrid velocity and other vector fields whose components are staggered with respect to each other.

Schedule (mostly)

4:00 - 5:00 Presentation
5:00 - 6:00 Social

"Code without tests is bad code.  It doesn't matter how well written it is; it doesn't matter how pretty or object-oriented or well-encapsulated it is.  With tests, we can change the behavior of our code quickly and verifiably.  Without them, we really don't know if our code is getting better or worse."   [FEATHERS]

A strong statement, but it does bring home the vital role of testing in software development.  Join us at the Boulder Outlook Hotel for:

Strategies, motivations, and influencing adoption of testing for scientific code

Ian Truslove, Erik Jasiak, NSIDC

Computation and programming are increasingly inescapable in the modern Earth sciences, but scientists and researchers receive little or no formal software engineering or programming training.  At the same time, research into the reproducibility of academic papers exposes disappointingly low rates of repeatability, and high-profile retractions due to computational or data errors increase the onus on researchers to write repeatable, reliable, even reusable programs; in other words, to "write better code".

Software engineering has plenty to say on the matter of "better code": metrics, methodologies, processes, tools...  Of course, none are indisputable and none provide absolute guarantees.  One seemingly obvious technique - testing - has enjoyed a renaissance in incarnations such as unit testing, and with approaches such as test-driven development (TDD) and behavior-driven development (BDD).

Based on our experience at the National Snow and Ice Data Center (NSIDC) with unit testing, TDD and BDD, we present a set of recommendations to scientific and research programmers about some techniques to try in their day to day programming, and possibly provide some inspiration to aim for more comprehensive approaches such as BDD. We will highlight some use cases of various types of testing at the NSIDC, discuss some of the cultural and management changes that occurred for programmers, scientists and project managers to consider and adopt processes such as TDD, make recommendations about how to introduce or expand rigorous code testing practices in your organization, and discuss the likely benefits in doing so.
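A tiny test-first example in the spirit of the talk: the tests pin down the behavior before (and while) the function is written, so later changes are verifiable. The function and its tests are invented for illustration; under pytest the `test_*` functions would be discovered and run automatically.

```python
def celsius_to_kelvin(t_c):
    """Convert Celsius to Kelvin, rejecting temperatures below absolute zero."""
    if t_c < -273.15:
        raise ValueError("below absolute zero")
    return t_c + 273.15

# Unit tests: each one states an expected behavior of the function above.
def test_freezing_point():
    assert celsius_to_kelvin(0.0) == 273.15

def test_rejects_unphysical_input():
    try:
        celsius_to_kelvin(-300.0)
    except ValueError:
        pass  # expected: unphysical input is refused
    else:
        raise AssertionError("expected ValueError")

test_freezing_point()
test_rejects_unphysical_input()
print("all tests passed")
```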

[Scroll down to see post presentation references and material.]

Schedule

4:00 - 5:00  Presentation

5:00 - 6:00  Social

All are welcome.

Post Presentation References and Material

Presentation slides

Wilson et al, "Best Practices for Scientific Computing", highly recommended!

Merali, "Computational science: ... Error" in Nature.  "As a general rule, researchers do not test or document their programs rigorously, and they rarely release their codes, making it almost impossible to reproduce and verify published results generated by scientific software, say computer scientists."

Good books for unit testing, TDD, and higher-level tests

Freeman and Pryce, Growing Object-Oriented Software, Guided by Tests

Beck, Test Driven Development: By Example

Fowler, Mocks Aren't Stubs - Martin Fowler on the terminology and usage of mocks, stubs, test doubles - all those "fake collaborators"

A couple of Bob Martin's books are particularly noteworthy for covering lots and lots of desirable attributes for code

Martin, Agile Software Development, Principles, Patterns, and Practices

Martin, Clean Code: A Handbook of Agile Software Craftsmanship

Some test frameworks

JUnit - the original (Java)

RSpec (Ruby)

Behave (Python)

Jasmine (JavaScript)

mgunit (IDL)

pfUnit (FORTRAN)

Cucumber  (Acceptance tests, lots of languages)

Also

Feathers, Michael C., Working Effectively with Legacy Code , Prentice Hall, 2005, p. xvi.

Snowden, Cynefin: Wikipedia on Cynefin; David Snowden introducing Cynefin (video) - applicable to knowledge management, cultural change, and community dynamics, and also to issues of organizational strategy.

Snowden, Boone, 2007 "A Leader's Framework for Decision Making" (must pay for access from Harvard Business Review, though perhaps available elsewhere)

NSIDC's and ultimately Boulder's loss of Mark Parsons is RDA's gain.  But maybe that's better for the world as a whole.  Join us at the Boulder Outlook Hotel on August 21 to hear about the Alliance Mark has joined.

The Research Data Alliance: Creating the culture and technology for an international data infrastructure

Mark Parsons, Managing Director, Research Data Alliance/U.S.

All of society's grand challenges -- be it addressing rapid climate change, curing cancer and other diseases, providing food and water for more than seven billion people, or understanding the origins of the universe or the mind -- require diverse and sometimes very large data to be shared and integrated across cultures, scales, and technologies. This requires a new form and new conception of infrastructure. The Research Data Alliance (RDA) is creating and implementing this new data infrastructure. It is building the connections that make data work across social and technical barriers.

RDA launched in March 2013 as an international alliance of researchers, data scientists, and organizations formed to build these connections and the infrastructure to accelerate data-driven innovation. RDA facilitates research data sharing, use, re-use, discoverability, and standards harmonization through the development and adoption of technologies, policy, practice, standards, and other deliverables. We do this through focused Working Groups, exploratory Interest Groups, and a broad, committed membership of individuals and organizations dedicated to improving data exchange.

What data sharing problem are you trying to solve?  Find out how RDA can help.

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

Please join us!

In July we're meeting in the 4th week of the month, rather than the 3rd.  Please join us at the Boulder Outlook Hotel for our own Ted talk:

HDF and The Earth Science Platform

Ted Habermann, The HDF Group

Interoperable data and understanding across the Earth Science community requires convergence towards a standard set of data formats and services, metadata standards, and conventions for effective use of both. Although large legacy archives still exist in netCDF3, HDF4, and many custom formats, we have achieved considerable convergence in the data format layer with the merger of the netCDF4 and HDF5 formats. The way forward seems clear as more groups in many disciplines join the HDF5 community. The data service layer has experienced similar convergence as OGC Service Standards are adopted and used in increasing numbers and connections across former chasms are deployed (ncWMS, ncSOS, netCDF/CF as OGC Standards). Many data providers around the world are in the process of converging towards ISO Standards for documenting data and services. Connections are also helping here (ncISO). Many groups are now working towards convergence in the conventions layer. The HDF-EOS and Climate-Forecast conventions have been used successfully for many datasets spanning many Earth Science disciplines. These two sets of conventions reflect different histories and approaches that provide a rich set of lessons learned as we move forward.

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

Stop on by!

This month we'll meet again at the Boulder Outlook Hotel for:

Py in the Sky: IPython and other tools for scientific computing

Monte Lunacek, Application Specialist, CU Research Computing

Roland Viger, Research Geographer, USGS

Python offers a rich toolkit that is useful for scientific computing.  In this talk, we will introduce the IPython package and discuss three useful components: the interactive shell, the web-based notebook, and the parallel interface.  We will also demonstrate a few concepts from the Pandas data analysis package and, time permitting, offer a few tips on how to profile and effortlessly speed up your Python code.  This talk will describe and illustrate these tools with example code.  If Python is not your favorite programming language, this overview might change that.
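As a small taste of the Pandas workflow mentioned above, the snippet below groups made-up station temperatures and computes per-station means; in the IPython notebook the result would render as a nicely formatted table. The station names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical monthly temperatures for two stations.
df = pd.DataFrame({
    "station": ["BDR", "BDR", "NWT", "NWT"],
    "month":   ["Jan", "Feb", "Jan", "Feb"],
    "temp_c":  [-2.0, 0.5, -8.0, -6.5],
})

# Split-apply-combine: mean temperature per station.
station_mean = df.groupby("station")["temp_c"].mean()
print(station_mean)
```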

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

Come on by!

Note that this month we are meeting on a Tuesday instead of a Wednesday!

Please join us at the Boulder Outlook Hotel for a presentation and demo of:

NOAA Earth Information Services and TerraViz

Eric Hackathorn, Julien Lynge, Jeff Smith, TerraViz, NOAA

Jebb Stewart, Chris MacDermaid, NEIS, NOAA

The NOAA Earth Information Services (NEIS) is a framework of layered services designed to facilitate the discovery, access, understanding, and visualization of data from the past, present, and future. It includes a visualization component named TerraViz, a multi-platform tool that runs on desktops, web browsers, and mobile devices. The goal is to ingest "big data" and convert that information into efficient formats for real-time visualization. Designed for a world where everything is in motion, NEIS and TerraViz allow fluid data integration and interaction across 4D time and space, providing a tool for everything NOAA does and the people NOAA affects.

TerraViz is built using the Unity game engine.  While a game engine may seem a strange choice for data visualizations, our philosophy is to take advantage of existing technology whenever possible.  Video games are a multibillion-dollar industry, and are quite simply the most powerful tools for pushing millions of points of data to the user in real time. Our presentation will illustrate displaying environmental data in TerraViz at a global scale, visualizing regional data in “scenes” such as the flooding of the Washington, DC area or rotating a coastal ecosystem in three axes, and developing environmental simulations/games such as exploring the ocean floor in a submarine.

The NEIS backend similarly takes lessons from private industry, using Apache Solr and other open source technologies to allow faceted search of NOAA data, much as sites like Amazon and Netflix do.
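The faceted-search idea can be sketched by building a Solr query: along with matching documents, Solr returns counts per requested facet field, which a UI presents as drill-down filters (the Amazon/Netflix-style browsing mentioned above). The core name and field names below are hypothetical, not NEIS's actual schema; the `q`, `facet`, and `facet.field` parameters are standard Solr. No network call is made.

```python
from urllib.parse import urlencode

def solr_facet_query(base_url, text, facet_fields):
    """Build a Solr faceted-search URL (sketch; no request is sent)."""
    params = [("q", text), ("wt", "json"), ("facet", "true")]
    params += [("facet.field", f) for f in facet_fields]
    return base_url + "/select?" + urlencode(params)

# Hypothetical query: free text plus three facet fields for drill-down.
url = solr_facet_query("http://localhost:8983/solr/noaa_data",
                       "sea surface temperature",
                       ["data_center", "platform", "time_resolution"])
print(url)
```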

We believe that to have an impact on society, data should be easy to find, access, visualize, and understand.  NEIS simplifies and abstracts searching, connectivity, and different data formats, allowing users to concentrate on the data and science.

Please contact us if you want to explore including your environmental data within NEIS/TerraViz or if you want to talk to us about developing custom visualizations or educational simulations to showcase your important data.

NOAA / Earth System Research Lab / Global Systems Division, Boulder, Colorado

NEIS/TerraViz: NEIS/TerraViz

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

This month marks the two-year anniversary of the BESSIG!   Please join us at the Boulder Outlook Hotel for a remote presentation:

Chris Lynnes, Chief Systems Engineer of the Goddard DAAC, NASA, "The Earth Science Collaboratory"

The Earth Science Collaboratory is a proposed framework for supporting the sharing within the Earth science community of data, tools, analysis methods, and results, plus all the contextual knowledge that goes with these artifacts.  The likely benefits include:

  • access to expert knowledge about how to work with data safely and efficiently
  • full reproducibility of results
  • efficient collaboration within multi-disciplinary and/or geographically distributed teams
  • a social network to bring together researchers and data users with common interests

Currently, there are some nascent efforts to construct such a collaboratory.  However, by its very (inclusive) nature, this construction is likely to be most successful as an emergent process, evolving from many point-to-point connections to an eventual ecosystem of cooperating components supporting collaboration. 

In particular, we are actively seeking scientists and other potential users of such a collaboratory to provide an end user perspective of system functionality.   Would you find such a collaboratory helpful?   Do you have ideas about how it could be better?  Would you like to influence its design?  Those that are actively engaged will be heard and could end up with a tool that particularly suits their needs.   If this role interests you, please attend this talk and/or otherwise let us know of your interest.

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

Drop on by!

Post presentation material 

The slides for the talk are available here: ESC BESSIG slides.

The recorded version of the talk is available here.  Please note that the talk actually starts 21 minutes into the recording, as the first 15 minutes were intended to be for testing.  (Sorry, we had serious technical difficulties at the hotel!  It will be better next time!) 

Please join us at the Boulder Outlook Hotel for:

Doug Lindholm, LASP, "LaTiS: a data model, an API, a web service AND a floor wax"*

LaTiS is a data model, a data analysis API, and a RESTful web service for accessing scientific data via a common interface.

The LaTiS data model provides a domain-independent, unifying, mathematical foundation for describing scientific datasets that captures the functional relationships between parameters. The Scala implementation of this model provides an API for reading data directly from their native source, the ability to compute with high-level abstractions appropriate for the task at hand, and options for filtering, transforming, and writing data in various formats.

This talk will discuss how these capabilities are used to enable a modular web service framework that can easily be installed and configured by a data provider, and that allows users to dynamically reformat a dataset, including its time representation, storage format, missing values, etc.
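The functional flavor of the data model can be sketched in a few lines: a dataset is a function from domain to range (here, time → irradiance), and selection and output formatting are composable operations on it. This is a toy illustration in Python, not the actual LaTiS (Scala) API; the sample values are invented.

```python
# A dataset as samples of a function: time -> total solar irradiance (W/m^2).
dataset = [(0, 1361.0), (1, 1360.8), (2, 1361.2), (3, 1360.9)]

def select(samples, predicate):
    """Keep samples matching a condition, e.g. 'time >= 2' (a selection)."""
    return [s for s in samples if predicate(s)]

def write_csv(samples):
    """One of several interchangeable output formats a service could offer."""
    return "\n".join(f"{t},{v}" for t, v in samples)

# Compose: subset the dataset, then format it for output.
subset = select(dataset, lambda s: s[0] >= 2)
print(write_csv(subset))
```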

This talk will be a preview (i.e., beta release) of the talk I will give at the UCAR Software Engineering Assembly conference in April.

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

Come on by!

* [Open on suburban kitchen, Wife and Husband arguing]
Wife: New LaTiS is a floor wax!
Husband: No, new LaTiS is a data model!
Wife: It's a floor wax!
Husband: It's a data model!
Wife: It's a floor wax, I'm telling you!
Husband: It's a data model, you cow!
Spokesman: [enters quickly] Hey, hey, hey, calm down, you two. New LaTiS is both a floor wax and a data model! Here, I'll spray some on your mop. [sprays LaTiS onto mop] ..and some for your data server. [sprinkles LaTiS onto laptop]
[Husband computes while Wife mops]
Husband: Mmmmm, works great!
Wife: And just look at that shine! **

** with apologies to SNL

Due to the constraints of our speaker, we're meeting the 2nd week of February instead of the 3rd.

Yet more around semantics!   Please join us at the Boulder Outlook Hotel for:

Beth Huffer, Lingua Logica, "ODISEES: An Ontology-Driven Interactive Search Environment for Earth Sciences"

As part of an ongoing effort at NASA Langley’s Atmospheric Science Data Center, and in cooperation with the Computational & Information Sciences & Technology Office at the Goddard Space Flight Center, we have developed a semi-automated method for finding and comparing equivalent data and climate model output variables across disparate datasets.  We will demonstrate an ontology-driven variable matching service that provides an automated mapping among comparable variables from multiple data products and climate model output products. The interactive user interface is driven by a queryable ontological model of the essential characteristics of data and climate model output variables, the products they occur in, the atmospheric parameters represented in the data, and the instruments and techniques used to measure or model the parameters. Queries of the ontology and triple store are used to match comparable variables by enabling users to search for those that share a user-specified set of essential characteristics.

The application addresses an emerging need among Earth scientists to compare climate model outputs to other models and to satellite observations, and addresses some of the barriers that currently make such comparisons difficult.  In particular, the application

  • Eliminates the need for users to be familiar with the multiple data vocabularies and standards that exist within the Earth sciences community; and
  • With a few mouse clicks, provides ready access to the information needed by scientists to understand the similarities and differences between two or more data or climate model products, enabling them to quickly determine which products best suit their requirements.
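The matching idea can be sketched as a comparison of essential characteristics: two variables are comparable when they agree on a user-specified set of them. The variable names and characteristics below are invented; the real service derives them from an ontology and answers the query against a triple store.

```python
# Hypothetical variables and their essential characteristics.
variables = {
    "MERRA_T2M":        {"parameter": "air_temperature", "level": "2m",
                         "source": "model"},
    "AIRS_SurfAirTemp": {"parameter": "air_temperature", "level": "2m",
                         "source": "satellite"},
    "MERRA_PS":         {"parameter": "surface_pressure", "level": "surface",
                         "source": "model"},
}

def match(vars_by_name, required):
    """Return variables whose characteristics include all required pairs."""
    return sorted(name for name, props in vars_by_name.items()
                  if all(props.get(k) == v for k, v in required.items()))

# Comparable 2 m air-temperature variables across model and satellite products:
print(match(variables, {"parameter": "air_temperature", "level": "2m"}))
```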

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social

Come on by!

More on semantics!  Please join us at the Boulder Outlook Hotel for:

Stephen Williams, Office of Faculty Affairs, CU Boulder, "VIVO, VITRO, DataStar, and Beyond - The VIVO Project"

The VIVO project was started at Cornell University in 2003 as a faculty profiling system for Mann Library.  The profiling system that is VIVO was designed in two parts: VITRO, the ontology-agnostic semantic engine, and VIVO, the ontology-specific pages and data for presenting faculty profiles.  This two-tiered concept was taken into a third tier with location-specific changes (Cornell and CU-Boulder) and ontologies that build upon VIVO (DataStar).  This talk will focus on the VIVO project as a whole: its history, its ancillary projects, and its future.  We'll also try to cover difficulties and lessons in semantic programming and the experiences of building ETL tools for semantic data.

Schedule

4:00 - 5:00 Presentation

5:00 - 6:00 Social