Category Archives: Wikimaps Tools

Introducing MetaPipe

Update: MetaPipe has been renamed GLAMpipe.

Background

I have previously made a tool called Flickr2GWToolset. It is a simple tool for editing the metadata of Flickr images and exporting the data to an XML file for the GLAM-Wiki Toolset. The tool was aimed mainly at GLAM collection metadata. As you can see below, the user interface of Flickr2GWToolset is rather complicated.

The lesson learned from that project was that the core design problem with this kind of tool is making all the functionality available without scaring the user away. The user interface becomes complicated, and every new feature makes it more complicated still.


Flickr2GWToolset user interface

To me, it seems obvious that extending user interfaces like the one above to include more and more functionality is a dead end (or it would at least require a super-talented designer). Still, even if there were such a designer, one fundamental problem would remain.

The remaining problem is that, after the metadata has been processed, there is no trace of what was done with the data. The history of actions is simply not there. If someone asks “what did you do with the metadata?”, one can only try to remember the actual steps. What if I could just show what I did? Or, even better, re-run my process on a different dataset?

At this point programmers raise their hands and say: “Just write scripts and then you can process any number of datasets”. That is true. Still, this approach has some problems.

The first one is obvious: how do you write scripts if you are not a programmer? The second problem is re-usability. When someone writes scripts, for example for processing metadata for a Wikimedia Commons upload, the result is often a “hack until it works” kind of script: awkward hacks, hardcoded values and no documentation (at least that is what my scripts look like). This makes re-using other programmers’ scripts very difficult, and people keep re-inventing the wheel over and over again.

The third problem is more profound. It is related to the origin of the data, and I will deal with it in the next section.

Collection metadata vs. machine data

When speaking of tools for (meta)data manipulation, it is important to define what kind of data we are dealing with. Here I make a distinction between machine data and collection data.

A server log file (a file that records, for example, which web pages were viewed and when) is machine data. It is produced by a computer, and it has a consistent structure *and* content. You can rely on that structure when manipulating the data: if there is a date field, it contains a date in a certain format, with no exceptions. When processing is needed, a script is created, tested and finally executed, and the data is processed. There is no need to edit the data by hand at any point. In fact, hand editing would endanger the reliability of the data.

Collection data, on the contrary, has a “human nature”. It is produced by humans over some period of time, and that period can include several changes in the way the data was produced and structured. When such data is made publicly accessible, it usually has a consistent structure, but there can be various inconsistencies in the structure and semantics of the content.

For example, an “author” field can contain a name or several names, but it can also contain dates of birth or death, or even descriptions of the authors. A “description” field can contain just a few words, or it can include every piece of information about the object that could not be fitted anywhere else in the data structure (and that should go somewhere else in the upload).

This kind of data can be an algorithmic nightmare. There are special cases and special cases of special cases, and it would be almost impossible to write an algorithm that could deal with every one of them. Often you can handle 90 or 99 percent of the cases; for the rest, it may be easiest to just edit the data manually.
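As a rough illustration (a hedged Ruby sketch with made-up example values, not GLAMpipe code), a heuristic that pulls a year out of a messy “author” field covers most records but still leaves some that are quicker to fix by hand:

# Illustration only: a heuristic for messy, human-produced "author" fields.
authors = [
  "Brandt, Signe (1882-1955)",           # name with life dates
  "Studio Apollo",                       # no dates at all
  "Unknown photographer, possibly 1920s" # free-text description
]

authors.each do |author|
  # Take the first four-digit number in the range 1800-1999 as a year.
  year = author[/\b(1[89]\d{2})\b/, 1]
  puts "#{author.inspect} -> year: #{year || 'needs manual editing'}"
end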

When working with this kind of data, it is important to be able to make manual edits during the process, which is difficult when the data is processed with scripts alone.

GLAMpipe by WMFI

GLAMpipe (we are still searching for a better name) relies on the concepts of visual programming and node-based editing in its user interface. Both are based on visual blocks that the user can add, remove and re-arrange. The result is both a visual presentation and an executable program. A node-based user interface is not a new idea, but for some cases it is a good one. There is a good analysis of node-based (actually flow-based, which is a slightly different thing) interfaces here: http://bergie.iki.fi/blog/inspiration-for-fbp-ui/. Below you can see an example of visual programming with Scratch. Can you work out what happens when you click the cat?

Visual programming with Scratch. Image by scratch.mit.edu, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=35453328


GLAMpipe combines visual programming and node-based editing rather loosely. The result of using nodes in GLAMpipe is a program that processes data in a certain way. You can think of nodes as modifiers or scripts that are applied to the current dataset.


A simple GLAMpipe project.

Above is a screenshot of GLAMpipe showing a simple project. There is a Flickr source node (blue), which brings data into the collection (black). Then there are two transform nodes (brownish). The first extracts the year from the “datetaken” field and puts the result in a field called “datetaken_year”. The second combines “title” and the extracted year, separated by a comma, and saves the result in a new field called “commons_title”.

Here is one record after the transform nodes have been executed:

ORIGINAL title: “Famous architect feeding a cat”
ORIGINAL datetaken: 1965-02-01 00:00:00
NEW datetaken_year: 1965
NEW commons_title: “Famous architect feeding a cat, 1965”

Note that the original data remains intact. This means you can re-run transform nodes any number of times. Let’s say you want the year in the Commons title inside brackets, like this: “Famous architect feeding a cat (1965)”. You can add brackets around “datetaken_year” in the transform node’s settings, then just re-run the node and the new values are written to the “commons_title” field.
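To make the idea concrete, here is a hedged Ruby sketch of what the two transforms do to a single record; it only illustrates the concept and is not GLAMpipe’s actual node code (nodes are JSON files with embedded scripts, as described below):

# Illustration only: the effect of the two transform nodes on one record.
record = {
  "title"     => "Famous architect feeding a cat",
  "datetaken" => "1965-02-01 00:00:00"
}

# Transform 1: extract the year from "datetaken" into a new field.
record["datetaken_year"] = record["datetaken"][0, 4]

# Transform 2: combine "title" and the year into "commons_title".
# Re-running with a different template, e.g. brackets, simply overwrites the field.
record["commons_title"] = "#{record['title']}, #{record['datetaken_year']}"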

The nodes have their own parameters and settings, which means that all the information about the editing process is packed into the project’s node setup. Now, if someone asks “how did you create the Commons titles for your data?”, I can simply share that setup. Better still, one can change the dataset and re-run the nodes by replacing the source node with a new source node that has a similar data structure. So if you want to process some other Flickr image album, you can do it by replacing the source node with one that points to the other album.

However, we are still experimenting with different options for how the UI should work.

Nodes

Nodes are the building blocks of data manipulation in GLAMpipe. Nodes can import data, transform data, download files, upload content or export content to a certain format.

Some examples of currently available nodes:

  • Flickr source node. Needs a Flickr API key (which is free) and an album id. When executed, it imports the data into the collection.
  • File source node. Reads data from a file and imports it into the collection. Currently it accepts data in CSV or TSV format.
  • Wikitext transform node. Maps your data fields to the Photograph or Map template and writes the wikitext to a new field in the collection.
  • Flickr image URL lookup node. Fetches the URLs of the different image sizes from Flickr and writes the information to the collection.
  • Search and replace transform node. Searches for a string, replaces it and writes the result back to the collection in a new field.

Below is a screencast of the georeferencer node in use. The georeferencer is a view node; a view node is basically a web page that can fetch and alter data via the GLAMpipe API.

Technically, nodes are JSON files that include several scripts. You can find more in-depth information here: https://github.com/artturimatias/metapipe-nodes

How to test?

GLAMpipe is a work in progress. Do not expect things to just work.

Installing GLAMpipe currently requires some technical skills. GLAMpipe is server software, but the only way to test it right now is to install it on your own computer. There are installation instructions for Linux. Installation on other operating systems should also be possible, but it has not been tested.

https://github.com/artturimatias/GLAMpipe

 

My assignment (Ari) in the Wikimaps 2.0 project is to improve the map upload process. This is done by adapting a separate tool-development project by WMFI, called MetaPipe (working title), so that it helps with map handling. The development of the tool is funded by the Finnish Ministry of Education and Culture.

Wikimaps on Maps Mania: Maps in the Age of Cholera & The Vintage Maps of Berlin

The wonderful, prolific and very popular Maps Mania blog has featured the Wikimaps Warper a few times recently; do check the posts out!

The first interactive map, The Vintage Maps of Berlin, uses the Wikimaps Warper.


Keir writes:

This collection of old historical maps of Berlin centers around what is now Museum Island in Berlin.

In the oldest maps you can clearly see the two towns of Cölln and Altberlin, on opposite banks of the River Spree. As you progress through the maps you can see how this area of Berlin has changed and developed over the centuries.

Do check out the 11 maps of Berlin from 1652 to today  here: http://homepage.ntlworld.com/keir.clarke/leaflet/berlin.htm

The second post and interactive map, entitled Maps in the Age of Cholera, is based on an epidemiological map of Leeds (coincidentally my home town).


This was also created by Keir and he writes:

Twenty Years before John Snow famously mapped the locations of cholera victims in Broad Street, London, Robert Baker plotted the deaths of cholera victims in Leeds. Maps in the Age of Cholera is a story map based around Robert Baker’s ‘Sanitary Map of the Town of Leeds’ exploring the 1832 cholera epidemic in the Yorkshire town. Baker never made the link between cholera and contaminated water. However, in his map and in the accompanying report to the Leeds Board of Health, Baker noted that “the disease was worst in those parts of the town where there is often an entire want of sewage, drainage and paving”.

The map itself uses this Leaflet Story Map plug-in, a library that uses jQuery to create a scroll-driven story map. The map tile scheme for Robert Baker’s 1832 ‘Sanitary Map of the Town of Leeds’ comes from the Wikimaps Warper.

Do go and check out the interactive story map here: http://homepage.ntlworld.com/keir.clarke/leaflet/cholera.htm

 

 

 

We are here. Where do we go next?


Wikimaps participated in three international events in November. It was a good time to look back at where we have got to and at the opportunities the work has opened up.

Creating a maker space for location-based historical storytelling

Moving Historical Geodata to the Web
New York Public Library, November 5–7

Wikimaps has become part of an ecosystem of initiatives that aim to open up and understand the geospatial cultural heritage captured in the documents held in libraries, archives and museums. We were especially happy to be invited to the event Moving Historical Geodata to the Web, organized by the New York Public Library. The event gathered actors from academia and the open-source and civic sectors, from three continents, to tackle the flow of historical geodata from printed maps all the way to its use in cultural applications. The goal was to see where the projects overlap and to think of common ways to deal with redundancy.

Common ground was sought through different exercises during two intensive workshop days. At the end, the participants committed to some common goals.

The Wikimaps project commits

to contribute to the OpenHistoricalMap project as a companion project and to establish a seamless workflow from old maps to OpenHistoricalMap. The communities are planning future working methods, and we hope to share some of the tasks.

Another key element of the Wikimaps roadmap has been the ability to use Wikidata as a gazetteer: a database that can connect place names and geographies and their changes through history and across languages. Humphrey Southall of the University of Portsmouth, which was nominated Educational Institution of the Year at this year’s Wikimania, has done pioneering work testing this in practice with the PastPlace gazetteer. The Pelagios project and others have been investigating the idea of a spinal gazetteer: a reference gazetteer linking the individual gazetteers created in various research projects and initiatives.

In order for Wikidata to serve as a historical gazetteer we must see that

  • the way places and place names are modelled serves historical gazetteers: the alternative names of places can be limited to date ranges or to a specific context.
  • the granularity of places accepted in Wikidata serves at least gazetteers (towns, villages, hamlets and neighborhoods, as well as rivers, islands, lakes etc.), if not even more detailed geographic entities.
  • all geographic elements on maps are notable enough to be accepted in Wikidata.
  • there is a good way to include, or link to, the historical geographic representation.

While waiting for the collected input from the workshop, you can have a look at the slide deck of the participants’ presentations, the participant bios and Lex Berman’s notes for the event. We also shared our experiences of building the Wikimaps community, which appear on separate slides here.

Wikimaps Expedition

State of the Map, Buenos Aires, November 7–9

State of the Map, the yearly gathering of the OpenStreetMap world, was organized in Latin America for the first time ever. Wikimaps presented the idea of the Wikimaps Expedition, a project model that tries to fuse practices from the successful GLAM projects with participatory mapping projects and meaningful humanitarian approaches.


Initially planned for the Archipelago Sea, a bilingual area between Finland and Sweden, the expedition could take many forms. The project would engage GLAMs, experts, Wikimedians, mappers and locals in a common effort to research the story of a location.

The work would have a preliminary online phase, during which historical maps, photographs and geodata are uploaded to Wikimedia Commons with the help of volunteers and experts. Wikipedians would edit articles and create new ones about the area. Mappers would do historical mapping and upload historical geodata sets to OpenHistoricalMap.

Locals and project participants would get together on location during the preliminary phase through field trips and other meetings, and finally an expedition would take place: collection events, interviews, edit-a-thons, mapping parties and workshops, with all the different project participants and partners staying together on location, producing and documenting, for perhaps a week.

Amazing participation

The highlight of the event was the presentation by the schoolchildren of Río Chico. Their education is carried out entirely over the web, as there is no local teaching staff. They had mapped their little village and travelled to the event to present the project. I missed the presentation on location, but was able to catch up on video.

Wikipedia TOWN

Wikipedia’s notability guidelines and its prohibition on original research have led to a situation where the enormous energy of people doing research on historical personalities and locations cannot be tapped.

I familiarized myself with the Japanese project Wikipedia TOWN, which has also set out to tackle these problems. They contribute to three different repositories: OpenStreetMap, Wikipedia for notable topics, and LocalWiki for topics that do not meet Wikipedia’s criteria.


We are here. Where do we go next?

Iberoconf, Buenos Aires, November 21

Finally, I had the opportunity to pull together thoughts from the previous conferences and present them at Iberoconf, the Latin American meeting of Wikimedia representatives. Thank you to the organizers of all the events!

Devise OmniAuth OAuth Strategy for MediaWiki (Wikipedia, Wikimedia Commons)

Authentication of MediaWiki users with a Rails Application using Devise and OmniAuth

Wikimaps uses a customised version of the Mapwarper open-source map georectification software (as seen on http://mapwarper.net), adapted to speak with the Commons infrastructure and running on the Wikimedia Foundation’s Labs servers. We needed a way to allow Commons users to log in easily, and so I developed the omniauth-mediawiki strategy gem so that Ruby applications can authenticate against Wikimedia wikis such as Wikipedia.org and Wikimedia Commons.


The Wikimaps Warper application uses Devise, which works very nicely with OmniAuth. The image above shows the traditional username-and-password login alongside OmniAuth logins to Wikimedia Commons, GitHub and OpenStreetMap. After clicking the Wikimedia Commons button the user is presented with an authorization page. It may not be that pretty, but once the user allows the request they are redirected back to our app and logged in. The library used the OmniAuth-OSM library as an initial framework to build upon.

The code is on GitHub here: https://github.com/timwaters/omniauth-mediawiki
The gem on RubyGems is here: https://rubygems.org/gems/omniauth-mediawiki

You can install it by including it in your Gemfile or by running:

gem install omniauth-mediawiki

Create new registration

The mediawiki.org registration page is where you create an OAuth consumer registration for your application. You can specify all Wikimedia wikis or a specific one to work with. Registration creates a key and a secret that work with your own user straight away, so you can start developing immediately, although currently a wiki admin has to approve each registration before other wiki users can use it. Hopefully this will change as more applications move away from HTTP Basic to more secure authentication and authorization strategies in the future!

Usage

Usage is the same as for any other OmniAuth 1.0 strategy. So, assuming you are using Rails, you need to add the strategy to your `Gemfile` alongside OmniAuth:

gem 'omniauth'
gem 'omniauth-mediawiki'

Once these are in, you need to add the following to your `config/initializers/omniauth.rb`:

Rails.application.config.middleware.use OmniAuth::Builder do
 provider :mediawiki, "consumer_key", "consumer_secret"
end

If you are using Devise, this is how it looks in your `config/initializers/devise.rb`:

config.omniauth :mediawiki, "consumer_key", "consumer_secret", 
    {:client_options => {:site => 'http://commons.wikimedia.org' }}

If you would like to use this plugin against another wiki, you can use the environment variable WIKI_AUTH_SITE to set the server to connect to. Alternatively, you can pass the site as a client_option to the OmniAuth config, as seen above. If no site is specified, the www.mediawiki.org wiki will be used.
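To show how the pieces fit together with Devise, here is a hedged sketch of a callback route and controller; the controller and the from_omniauth helper are conventional Devise-style names used for illustration, not part of the gem:

# config/routes.rb
devise_for :users, :controllers => { :omniauth_callbacks => "users/omniauth_callbacks" }

# app/controllers/users/omniauth_callbacks_controller.rb
class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def mediawiki
    auth = request.env["omniauth.auth"]
    # Hypothetical helper: find or create a local user for this wiki account
    # and store auth["credentials"]["token"] / ["secret"] for later API calls.
    @user = User.from_omniauth(auth)
    sign_in_and_redirect @user, :event => :authentication
  end
end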

Notes

In general, see the pages around https://www.mediawiki.org/wiki/OAuth/For_Developers for more information. When registering a new OAuth consumer you need to specify the callback URL properly, e.g. for development:

http://localhost:3000/u/auth/mediawiki/callback
http://localhost:3000/users/auth/mediawiki/callback

This is different from many other OAuth authentication providers, which allow the consumer application to specify what the callback should be. Here we have to define the URL when we register the application, and it is not possible to alter the URL after the registration has been made.

Internally the strategy library has to use `/w/index.php?title=` paths in a few places, like so:

:authorize_path => '/wiki/Special:Oauth/authorize',
:access_token_path => '/w/index.php?title=Special:OAuth/token',
:request_token_path => '/w/index.php?title=Special:OAuth/initiate',

This could be due to a bug in the OAuth extension, or due to how the wiki redirects from /wiki/Special pages to /w/index.php pages; I suspect this may change in the future.

Another thing to note is that the MediaWiki OAuth implementation uses a cool but non-standard way of identifying the user. OmniAuth and Devise need a way to get the identity of the user: calling '/w/index.php?title=Special:OAuth/identify' returns a JSON Web Token (JWT). The JWT is signed using the OAuth secret, so the library decodes it and gets the user information.
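As a rough sketch of what that identify step looks like (for illustration only, assuming the ruby jwt gem and placeholder credentials; the strategy does its own decoding internally):

require 'oauth'
require 'jwt'

consumer = OAuth::Consumer.new("consumer_key", "consumer_secret",
                               :site => "https://commons.wikimedia.org")
access_token = OAuth::AccessToken.new(consumer, "user_token", "user_secret")

# Special:OAuth/identify returns a JWT signed with the consumer secret.
resp = access_token.get('/w/index.php?title=Special:OAuth/identify')

# Verify and decode the JWT; the payload carries the user information.
payload, _header = JWT.decode(resp.body, "consumer_secret", true, :algorithm => 'HS256')
puts payload["username"]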

Calling the MediaWiki API

OmniAuth is mainly about authentication; it is not really about using OAuth to do things on the user’s behalf, but it is relatively easy to do that if you want to. The OmniAuth developers recommend using it in conjunction with other libraries: for example, if you are using omniauth-twitter, you should use the Twitter gem with the OAuth authentication variables to post tweets. There is no such gem for MediaWiki that uses OAuth. Existing Ruby libraries such as MediaWiki Gateway and MediaWiki Ruby API currently only use usernames and passwords, but they are worth looking at for help in crafting the necessary requests. So we will have to use the OAuth library and call the MediaWiki API directly; in this example we will call the Wikimedia Commons API.

Within a Devise / OmniAuth setup, in the callback method, you can get an OAuth::AccessToken directly via request.env["omniauth.auth"]["extra"]["access_token"], or you can get the token and secret from request.env["omniauth.auth"]["credentials"]["token"] and request.env["omniauth.auth"]["credentials"]["secret"]. Assuming the authentication token and secret are stored in the user model, the following could be used to query the MediaWiki API at a later date:

require 'oauth'

# Build a consumer with the application's key and secret, then an access
# token from the user's stored OAuth credentials.
@consumer = OAuth::Consumer.new "consumer_key", "consumer_secret",
            {:site=>"https://commons.wikimedia.org"}
@access_token = OAuth::AccessToken.new(@consumer, user.auth_token, user.auth_secret)

# Query the userinfo meta module for the user's rights and edit count.
uri = 'https://commons.wikimedia.org/w/api.php?action=query&meta=userinfo&uiprop=rights|editcount&format=json'
resp = @access_token.get(URI.encode(uri))
logger.debug resp.body.inspect
# {"query":{"userinfo":{"id":12345,"name":"WikiUser",
# "rights":["read","writeapi","purge","autoconfirmed","editsemiprotected","skipcaptcha"],
# "editcount":2323}}}

Here we called the query action for userinfo, asking for the rights and editcount information.
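The JSON body can then be parsed as usual, for example:

require 'json'

info = JSON.parse(resp.body)["query"]["userinfo"]
info["name"]       # => "WikiUser"
info["editcount"]  # => 2323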

Maps at the Zürich hackathon

The Wikimedia Hackathon was held in Zürich this year with a focus on maps. The conclusion: maps will be an integral part of what Wikimedia is, as soon as everything that was presented is taken into use.

Wikimaps.org

Erik Möller (Wikimedia Foundation) envisions that Wikimaps.org will be a new project, with a community and repository of its own. The space could host tools for creating maps as well as geographic data in all its forms.

The Maps namespace

Jon Robson (WMF Engineering, web), Katie Filbert (WMDE, Wikidata) and Derk-Jan Hartman created a MediaWiki extension for maps. It introduces a Map namespace where data for the map is stored in raw GeoJSON and can be edited via a JavaScript map editor interface. It also allows the inclusion of maps in wiki articles via a template.

The demo: https://wikimaps-ext.wmflabs.org
The codebase: https://github.com/jdlrobson/WikiMaps

Wikimedia tiles

Getting Wikimedia’s own tileserver has finally moved forward. Kai Krueger, Tim Alder and Alexandros Kosiaris (WMF Engineering, operations) have worked to set up a test environment that shows the base map in several languages. The style priority list is currently: Mapnik default, no-labels, hikebike, black & white, WikiMiniAtlas, multilingual map and hillshading.

Vector rendering is high on the wish list. It would be ideal for historical maps, as there is an infinite number of snapshots of history which should be rendered as tiles. I would not like to be the one who curates important dates!

Wikimaps

Our project Wikimaps might need a new name. When we move under a Wikimaps.org umbrella, we become the Old maps project, the OpenHistoricalMap counterpart in the Wikiworld and the Wikimaps Gazetteer project.

Using the Maps namespace for old maps

In our Wikimaps old maps project, a key use case for the new Maps extension is displaying an old map layered on top of the current map. It can be used on the map’s file page on Commons, or in the Upload Wizard, with an interface for positioning the old map on the current map. In this example the interface is a collection of elements from the iD editor for OSM, the Maptcha project and the Wikimedia styles.

Template:Map for map metadata

André Costa (WMSE) worked to finish the first version of Template:Map, which we want to include in the GWToolset as well as the Upload Wizard. We are still open to influences: please give feedback! Wikidata will soon take over handling Wikimedia Commons metadata, and these metadata templates will become obsolete, but in the meantime we will upload hundreds of maps with their help and learn about map metadata.

The information has different layers:

  1. Image data that is similar to data about any image.
  2. Publication and copyright: the cartographer, publisher, printer etc.
  3. Geographic: point or bounding box, place names, time, scale etc.
  4. Object in the archive: materials used, ID, institution data etc.

Wikimaps workflow: Request for Comments

Wikimaps project flowchart

Many suggested that we formulate the Wikimaps workflow as an RfC. Please share your thoughts about this draft, and prepare to discuss the actual document. (link to be added)

Wikimaps Atlas


The Wikimaps Atlas team is halfway through their Individual Engagement Grant. They are producing a scripting environment to recreate all of Wikipedia’s hand-made maps.


The Wikimaps Atlas team, Arun Ganesh (left) and Hugo Lopez (sitting), discussing with Jakub Kaniewsky. CC BY-SA 3.0 Pakeha, Wikimedia Commons

Bridging between projects

A wealth of map projects was presented:

Beat Esterman (WMCH) rectified old Zürich maps and provided us with a lot of valuable user testing with the Warper.

Simone Cortesi (WMIT) also had a set of Italian maps to rectify.


Petr Pridal from Klokantech produces a georeferencing environment that is used by many memory institutions and the map portal OldMapsOnline. Jakub Kaniewsky has produced Sharemap, a toolset for working with maps, that also features map rectification. We discussed the interoperability of data produced in these environments.

Tim Alder has created a map view for showing items in specific classes in Wikidata. See the tool in: http://tools.wmflabs.org/wp-world/wikidata/superclasses.php?lang=en.


Cultural heritage items in Wikidata plotted on a map. Wikidata superclasses by Tim Alder.

The Reasonator is a creation by Magnus Manske, an original creator of MediaWiki. It is an environment to test Wikidata capabilities that are not yet in production. The latest addition to the toolpack has been the display of Wikidata items within a certain radius from a point on the map. You can follow the new features in Reasonator and Wikidata in Gerard Meijssen’s blog Words and What Not.


Reasonator showing all Wikidata entries within a radius from a coordinate point.

Collaboration between Wikimedia and OpenStreetMap

Simon Poole, the chair of the OpenStreetMap Foundation, attended the hackathon and led a group of people on a tour around Zürich. It was a mapping party: we collected house numbers by sneaking around houses, marked forgotten details and corrected errors made by ignorant German mappers!

There were many pressing topics to discuss between OSM and Wikimedia, and the presence of Lila Tretikov, the new Executive Director of the Wikimedia Foundation, made the event feel like the Davos of open knowledge.


Lydia Pintscher, Product Manager for Wikidata, and Lila Tretikov, Executive Director of the Wikimedia Foundation, getting familiar with data issues. CC BY-SA 3.0 Ludovic P., Wikimedia Commons

The most important issue to solve is the licensing incompatibility between the Wikimedia and OpenStreetMap projects. Even if it is possible to combine the projects through skillful linking, it is not easy for a volunteer to navigate the differences. We are excited to see the enthusiasm in both projects to create something great together, and we are waiting to see what the organizations’ legal teams will come up with.

Tracking crossover projects

Quim Gil (WMF, Engineering community) has created a page to collect crossover projects between Wikimedia and OpenStreetMap. A Wikimedia Tech Talk around maps is also planned.

More maps, different maps

Using OSM data on a map in Wikipedia is only one of the many use cases out there. We also have the OpenHistoricalMap database to map to Wikimedia data. Having weighed several options, we favour the solution where the OHM database is kept separate from Wikidata and items are mapped against each other only when needed: geographic contributions are made to the OHM database and further content to Wikidata. If this sounds obscure, let’s discuss it further in the RfC!

Tim Alder outlined a new proposal to store “ephemeral geodata” in an instance of the OSM toolstack, the Open-Wikidata-map. This means fuzzy features such as climate regions, animal habitats and thematic features of all kinds. Mikel Maron proposed such infrastructure in a recent SotM US talk, OpenStreetMap as Infrastructure.

OpenSeaMap is another specialized geodatabase that could be linked to Wikipedia articles about seas, rivers, water sports and shipping.

More proposals

Tim Alder also proposed initiating a network of local OSM/Wikimedia ambassadors in as many countries as possible. They could create projects, organize events and work in collaboration with the forthcoming Maps & Geo team at the Wikimedia Foundation.

The authentication across projects through OAuth should be put into action.

Simon Poole mentioned the idea of collecting aerial imagery, both user-generated and open data. Tim Alder pointed out that it would be a natural continuation of WMDE’s support for OpenGeoServer.

Andy Mabbett suggested the use of crowdsourcing games and bots for adding Wikipedia links to OpenStreetMap objects.

Going beyond maps

Dan Andreescu (WMF Analytics) worked on a visualization framework that overlaps a bit with the Maps namespace. Have a look at Charles Joseph Minard’s famous Napoleon flow map recreated with the extension!


Click the image for the original visualization


Discussion page about open datasets

Open data worlds

The event gathered many open data activists, and there were many discussions about how and where to store open data within the Wikimedia family. David Cuenca created a page to answer (or, in fact, ask) some of those questions.


Microcontributions, attracting new editors. Thiemo Mättig CC-BY-SA 3.0

What A Summer This Will Be

My name is Jaime Lyn Schatz, and through luck and skill and more than a little chutzpah, I am a newly minted GNOME/FOSS OPW intern on the OpenHistoricalMap project. I will be working with a terrific team of engineers[1] who are volunteering their time to this project, including Robert Warren, who will be my mentor on this journey. I feel deeply honored and incredibly lucky to have this opportunity.

The Wikimaps project seeks to draw together data from the OpenStreetMap and OpenHistoricalMap projects to enable users to view maps backward through the 4th dimension: time. (Allowing users to view maps that reach forward in the 4th dimension is, sadly, out of the scope of this project. 😉 )

The entire project has three main modules:

1. Enhance iD, the JavaScript map editor, and The_Rails_Port, the OHM backend, so that a JavaScript time/date slider can be added to control the time period of interest.

2. Enhance iD and The_Rails_Port by adding metadata hooks to the code that allow for custom deployments of both pieces of software. This will allow multiple interfaces to be generated from the same data source.

3. Modify Mapnik, the software that renders the map images, to handle starting and ending dates for maps shown.

My next steps will be to get up to speed on the nitty-gritty of the iD and The_Rails_Port code bases and to develop a Minimum Viable Product for the time slider. Stay tuned for updates!

[1] Robert Warren, Tim Waters, Sanjay Bhangar, Jeff Meyer and Susanna Ånäs (Project Leader).


January Hangouts: Designers & Developers

The Wikimaps creative communities met online on 7 January. It was the second event for the Designers & Developers community, and the Wikimaps Nordic community met online for the first time.

Designers & Developers

Arun Ganesh had prepared a presentation of the Wikimaps Atlas for the event, but technical obstacles prevented it from happening. The project is a newly funded Wikimedia IEG project. Arun and Hugo Lopez will create a set of up-to-date maps and tools for easily creating maps on demand. They will work on Wikimedia-specific map styles, and look after the implementation of the maps in the Wikipedias. We’ll be hearing more from them!

Type in a different language code in the input box, hit tab and see what happens. Project by Arun Ganesh.

In the discussion we aimed to frame the scope of the Wikimaps tools project correctly. We had two key topics to tackle:

  1. How should we deal with the integration of the Wikimaps tools into Wikimedia?
  2. We must start from use cases to come up with the most important tasks.

Talking about cropping

Dan Michael O. Heggø spoke briefly about his experiences with OAuth. With OAuth, the Wikimaps tools could be external to MediaWiki, and the user could authenticate in the external tool using their Wikimedia credentials. Dan has created a tool for cropping images on Commons, the Crop Tool.

The crop tool in your Wikipedia toolset

The discussion drifted to whether or not to crop maps, for example when wanting to stitch several map sheets together. On this we have a common understanding: you must not crop a map before uploading it to Commons! You can make a separate cropped copy of the map, and a special template can be made to maintain the connection between the images. Or you can use the cropping tools in the Warper to mask undesired areas. We also noted the need to highlight different areas of the map image, such as the legend or the scale.

We noted that images and maps can already be annotated in Wikimedia Commons using the ImageAnnotator. There are other great projects dealing with map annotations, such as the MapHub project and the subsequent Annotorious project.

This again led to a discussion about identifying and collecting place names on a map, or using place names as a means of roughly geolocating maps. There are academic and governmental place-name repositories that could all be taken advantage of. We will touch on this topic further along the Wikimaps roadmap, in the Wikimaps Gazetteer project. Even though it is not in the making yet, ideas about it are more than welcome.

Talking about use cases

As a commentator put it in the Etherpad for ideas about Wikimaps Warper: What’s it for? Who’s going to use it? What are the key things users want it to do?

Rectifying maps in Wikimedia Commons is a facility many people may be interested in. The users may be archives that want volunteers to rectify their maps, or historical mappers looking for appropriate old maps to use as an information source, and many others.

We can see that there are users who are interested in the pixels (often the archives) and others who are interested in the data. For example, you can calculate travel times along ancient routes if you have extracted the roads from the map images as vector data. See for example http://omnesviae.org/, http://orbis.stanford.edu/ and http://vici.org/.

The maps within the scope of Wikimaps may also cover undermapped areas of the world, as brought up by Jaakko Helleranta, who works with Humanitarian OpenStreetMap in Nicaragua. Tim Waters showed us http://maps.nypl.org/relief, a version of the MapWarper that was used to georeference maps of Haiti for the earthquake relief.

You can join in defining the use cases on the Wikimaps Design page. And please ask for editor rights here to tell us about your experiences or projects with historical maps. Comments and discussion are welcome! You can also read the meeting minutes.

Thank you for participating in the January hangout: Jan Ainali, John Erling Blad, André Costa, Tom Fish, Arun Ganesh, Harald Groven, Jaakko Helleranta, Yuwei Lin, Dan Michael Olsen Heggø, Pekka Sarkola, Manuela Schmidt, Rob Warren and Tim Waters!