*Working on a climate-change-related project I had to recover some positional data from a thematic map: here are a couple of Python snippets I used.*

### The problem

First, let me clearly state the problem whose solution I want to show: “I have this thematic map, scanned from a book or downloaded from a site, showing some interesting points. I need to retrieve the geographical coordinates of these points.” (Nobody really needs latitude/longitude as such, but the user may want the state, or the area, or to compare the data with another map, and all of these things can be done once you have coordinates.)

### How to

To exemplify the problem I created a dummy map with Google My Maps, since I do not have the rights to redistribute the maps I worked on. The map was converted to a 3032 × 2304 JPEG image. It basically covers north-western Italy, Switzerland and south-eastern France, with some random points in green:

The next step is isolating and cleaning the image of a “point” on the map:

Now we can start playing with Python. I started a new notebook and loaded some libraries:

```
import numpy as np
import cv2
from matplotlib import pyplot as plt
```

Then I loaded the images:

```
img = cv2.imread('TestMap.jpg')
imgt= cv2.imread('Punto.jpg')
```

And here comes the magic. OpenCV has a function called matchTemplate, which simply looks for copies of a small image (the template) inside a larger one. Different algorithms can be used to do that, and OpenCV offers six options:

```
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']
for i in range(6):
    method = eval(methods[i])
    res = cv2.matchTemplate(img, imgt, method)
    plt.imshow(res, cmap='gray')
    plt.title("Algorithm: " + methods[i])
    plt.show()
```

The results are as follows:

I decided that the first method (TM_CCOEFF) was a good choice, and did a little rescaling on its result:

```
method = eval(methods[0])
res = cv2.matchTemplate(img, imgt, method)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
res = 255 * (res - min_val) / (max_val - min_val)  # rescale to the 0-255 range
imres = res.astype('uint8')
imres = cv2.bitwise_not(imres)                     # invert for display
```

Let’s look, for example, at a slice of the image that we know contains some points. Here’s what it looks like:

`plt.plot(imres[1063])`

The positions of the points are quite clear… so we can easily detect them:

```
imgtarget = img.copy()
List = []
for i in range(1, res.shape[0] - 1):
    for j in range(1, res.shape[1] - 1):
        P0 = res[i, j]
        # Largest value among the 8 neighbours of (i, j)
        P1 = max(res[i-1, j], res[i-1, j-1], res[i-1, j+1],
                 res[i, j-1], res[i, j+1],
                 res[i+1, j-1], res[i+1, j], res[i+1, j+1])
        if P0 >= P1 and P0 > 100:
            # We found a local maximum!
            List.append([i, j])
for MyPoint in List:
    auxP0 = (int(MyPoint[1]), int(MyPoint[0]))
    auxP1 = (int(MyPoint[1] + 103), int(MyPoint[0] + 109))
    cv2.rectangle(imgtarget, auxP0, auxP1, (0, 0, 255), 3)
```

The new image “imgtarget” clearly shows that the non-overlapping points have been identified:

Let’s look at the content of List. Notice that the coordinates stored in List are offset with respect to the centres of the points: this is because OpenCV’s template matching algorithms report each match at the position of the template’s upper-left corner. List contains:

```
[[739, 1987],
[841, 1623],
[893, 1399],
[1053, 2037],
[1063, 1241],
[1111, 2253],
[1125, 1705],
[1214, 1475],
[1321, 1881],
[1351, 1525]]
```
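Since each match is anchored at the template’s upper-left corner, shifting by half the template size recovers the point centres. Here is a minimal sketch; the 103 × 109 px template size is an assumption read off the rectangle offsets used above (`to_centres` is a hypothetical helper, not part of the original code):

```python
# Assumed template size, taken from the rectangle offsets above (width, height)
TEMPLATE_W, TEMPLATE_H = 103, 109

def to_centres(matches, w=TEMPLATE_W, h=TEMPLATE_H):
    """matches holds [row, col] pairs of upper-left corners;
    return the [row, col] pairs of the corresponding template centres."""
    return [[i + h // 2, j + w // 2] for i, j in matches]

# First entries of List, from the output above
sample = [[739, 1987], [841, 1623], [893, 1399]]
print(to_centres(sample))
```

The same shift should also be applied before converting image coordinates to geographic ones, otherwise every point inherits a constant bias towards the lower right.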

Now… it would be interesting to see how accurate these points are. First of all, we get the coordinates of three known points on the map. For example, three cities:

| City | X-image | Y-image | Lat N | Long E |
|---|---|---|---|---|
| Geneve | 789 | 657 | 46.20222 | 6.14569 |
| Nice | 1197 | 1941 | 43.70313 | 7.26608 |
| Bologna | 2685 | 1545 | 44.49381 | 11.33875 |

To make a long story short, you can use these three points to compute an affine transformation from image points to geographic coordinates. Wondering how precise this approach is? You can measure it (since I created the dummy map, I already know the precise location of all the points):

| Point | Lat-Calc | Lon-Calc | Lat-True | Lon-True |
|---|---|---|---|---|
| #1 | 45.95242 | 9.568194 | 45.95546 | 9.56245 |
| #2 | 45.74900 | 8.571936 | 45.76035 | 8.56269 |
| #3 | 45.64482 | 7.958826 | 45.66062 | 7.94746 |
| #4 | 45.34066 | 9.705911 | 45.35264 | 9.69978 |
| #5 | 45.31133 | 7.526717 | 45.33364 | 7.51344 |
| #6 | 45.23021 | 10.29741 | 45.24058 | 10.29304 |
| #7 | 45.19614 | 8.797181 | 45.21350 | 8.78791 |
| #8 | 45.01973 | 8.167742 | 45.04103 | 8.15620 |
| #9 | 44.81607 | 9.279538 | 44.83300 | 9.27131 |
| #10 | 44.75316 | 8.304991 | 44.77454 | 8.29353 |
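The affine fit itself can be sketched with plain NumPy: three control points give six equations for the six affine parameters, so the system is exactly determined (`cv2.getAffineTransform` would do the same job on `float32` point arrays; the numbers below are the three cities from the table above, and `img_to_geo` is a hypothetical helper name):

```python
import numpy as np

# (x, y) image pixels and (lat, lon) for Geneve, Nice, Bologna
xy = np.array([[789, 657], [1197, 1941], [2685, 1545]], dtype=float)
latlon = np.array([[46.20222, 6.14569],
                   [43.70313, 7.26608],
                   [44.49381, 11.33875]])

# Each row [x, y, 1] times a 3x2 parameter matrix A gives [lat, lon]
G = np.hstack([xy, np.ones((3, 1))])
A = np.linalg.solve(G, latlon)  # 3x2 matrix of affine parameters

def img_to_geo(x, y):
    """Map image pixel (x, y) to [lat, lon] via the fitted affine transform."""
    return np.array([x, y, 1.0]) @ A

# Sanity check: a control point must map back onto itself (up to rounding)
print(img_to_geo(789, 657))
```

By construction the transform is exact at the three control points; any residual error shows up only at the other points on the map.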

The average distance between the calculated and true points is 1.6 km, with a maximum of 2.3 km (and a minimum of 0.7 km), and the error seems systematic (on average it points north-west): the results are fairly good, and could probably be improved with a more careful choice of the three base points.
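The post does not say how these distances were measured; a standard choice is the haversine (great-circle) formula, which, applied to the table above, gives per-point errors of the same order, a kilometre or two:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * r * asin(sqrt(a))

# (lat_calc, lon_calc, lat_true, lon_true) for the ten points in the table
points = [
    (45.95242, 9.568194, 45.95546, 9.56245),
    (45.74900, 8.571936, 45.76035, 8.56269),
    (45.64482, 7.958826, 45.66062, 7.94746),
    (45.34066, 9.705911, 45.35264, 9.69978),
    (45.31133, 7.526717, 45.33364, 7.51344),
    (45.23021, 10.29741, 45.24058, 10.29304),
    (45.19614, 8.797181, 45.21350, 8.78791),
    (45.01973, 8.167742, 45.04103, 8.15620),
    (44.81607, 9.279538, 44.83300, 9.27131),
    (44.75316, 8.304991, 44.77454, 8.29353),
]
errors = [haversine_km(*p) for p in points]
print(f"mean {sum(errors) / len(errors):.2f} km, "
      f"min {min(errors):.2f} km, max {max(errors):.2f} km")
```

Exact figures depend on the distance formula and rounding used, so small deviations from the numbers quoted above are to be expected.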

### Conclusion

It is possible to retrieve geographic data from maps stored as images, and the results are precise enough to be used for comparison between thematic maps (for example, in my application I was looking at flooding hazard, and the data could be used with good results).