Rsync examples

# TrueNAS
# Flags: -a archive, -v verbose, -x stay on one filesystem, -zz compress (newer algorithm),
# -h human-readable, -S handle sparse files, -P progress + keep partial transfers
sleep 5
rsync -avxzzhSP --stats --delete --rsh=ssh /mnt/owncloud-data root@truenas.domain.tld:/mnt/tank0/backup/

sleep 5
ssh root@truenas.domain.tld chown -R user:user /mnt/tank0/backup/

# Linux
sleep 5
rsync -avxzhSP --stats --delete --rsh=ssh /mnt/owncloud-data root@linux-server.domain.tld:/mnt/backup/

# Google Drive
# Prerequisites: see https://www.more-it.de/mount-google-drive-locally-to-a-linux-machine/
# --inplace writes updates directly into the target files, which suits a FUSE-mounted drive
sleep 5
rsync -avxzh --progress --stats --inplace --delete /mnt/owncloud-data/ /mnt/google-drive/
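
The sleep 5 lines suggest these snippets come from a single backup script. A minimal Python sketch of how they could be chained, reusing the hosts and paths from the examples above; the whole wrapper is illustrative and not part of the original:

import subprocess
import time

# One entry per rsync example above; hosts and paths are the article's placeholders.
jobs = [
    ["rsync", "-avxzzhSP", "--stats", "--delete", "--rsh=ssh",
     "/mnt/owncloud-data", "root@truenas.domain.tld:/mnt/tank0/backup/"],
    ["rsync", "-avxzhSP", "--stats", "--delete", "--rsh=ssh",
     "/mnt/owncloud-data", "root@linux-server.domain.tld:/mnt/backup/"],
    ["rsync", "-avxzh", "--progress", "--stats", "--inplace", "--delete",
     "/mnt/owncloud-data/", "/mnt/google-drive/"],
]

for job in jobs:
    time.sleep(5)  # mirrors the pauses between the snippets above
    result = subprocess.run(job)
    if result.returncode != 0:
        print("backup job failed with exit code", result.returncode, ":", " ".join(job))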

Google location history in Elasticsearch

Prerequisites:
Thanks to Kevin Dwyer for the esloader.py script, which demonstrates the Elasticsearch bulk API in a simple way.

See his GitHub repository tracky: https://github.com/dwyerk/tracky

Navigate to Google Maps Timeline with a desktop browser.

At the bottom right you will find a gear icon that opens your settings.

Choose "Download a copy of all my data".

You will be presented with a menu where you can download data from all the Google services.

Untick everything but the location history and confirm the download.

Be patient, as it can take several hours until the export has been created.

You will be notified via mail; download the archive then.

Extract the archive on a desktop PC and you will find one big JSON file containing your whole location history (besides subfolders with daily details). It is named something like "Location History.json"; in German, exactly "Standortverlauf.json".

Besides the JSON files you will also find a good explanation of the data you got: see the HTML file contained in the archive.

Copy this "Location History.json" (or "Standortverlauf.json") file over to your Elasticsearch box.

Navigate to https://github.com/dwyerk/tracky.
At a minimum, copy over the esloader.py script and adjust it to your individual settings (Elasticsearch instance, index name, index mapping; mostly self-explanatory). The full script is also included below the pictures.

Then run esloader.py with your JSON file name ("Location History.json" or "Standortverlauf.json") as parameter, e.g. python esloader.py Standortverlauf.json.

Attention: importing a huge number of geo points can take a while. For a few million entries you will have to wait several minutes.
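
If memory is tight, you can avoid building the whole actions list up front, as the script below does. A minimal sketch of a chunked variant using the client's streaming_bulk helper; es, index_name and locations are the variables from the script below, and the chunk size is an arbitrary assumption:

import elasticsearch.helpers as helpers

def generate_actions(locations, index_name):
    # Yield one bulk action per location instead of materializing a full list.
    for i, location in enumerate(locations):
        yield {"_index": index_name, "_id": i, "_source": location}

# streaming_bulk sends the documents in chunks and yields one (ok, result) pair per document.
for ok, result in helpers.streaming_bulk(es, generate_actions(locations, index_name), chunk_size=5000):
    if not ok:
        print("failed to index:", result)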

After that you can start to visualize.
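
Before building dashboards, a quick sanity check from Python is useful; this sketch assumes the index name from the script below, and the bounding box coordinates are arbitrary placeholders:

from elasticsearch import Elasticsearch

es = Elasticsearch('http://localhost:9200')

# How many locations made it into the index?
print(es.count(index='denise-manual')['count'])

# Sample query: all points inside a rough bounding box (placeholder values).
query = {
    "query": {
        "geo_bounding_box": {
            "point": {
                "top_left": {"lat": 55.0, "lon": 5.0},
                "bottom_right": {"lat": 47.0, "lon": 15.0}
            }
        }
    }
}
result = es.search(index='denise-manual', body=query, size=3)
print(result['hits']['total'])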

Some sample data from my last ten years:

Our trip to the sea with slightly decreasing altitude ; )

import ujson as json
from argparse import ArgumentParser
from datetime import datetime

from elasticsearch import Elasticsearch
import elasticsearch.helpers as helpers

# Adjust to your Elasticsearch instance (scheme included for newer clients).
elastic_url = 'http://localhost:9200'
es = Elasticsearch(elastic_url)

# Adjust the index name to your liking.
index_name = 'denise-manual'

# Index mapping: E7 coordinates and derived values as doubles, the combined
# "point" field as geo_point so it can be drawn on a map.
mapping = {
    "properties": {
        "accuracy": {
            "type": "integer"
        },
        "activity": {
            "type": "nested",
            "dynamic": False,
            "properties": {
                "activity": {
                    "type": "nested",
                    "dynamic": False,
                    "properties": {
                        "confidence": {
                            "type": "integer"
                        },
                        "type": {
                            "type": "keyword"
                        }
                    }
                },
                "timestampMs": {
                    "type": "keyword"
                }
            }
        },
        "point": {
            "type": "geo_point"
        },
        "latitude": {
            "type": "double"
        },
        "latitudeE7": {
            "type": "double"
        },
        "longitude": {
            "type": "double"
        },
        "longitudeE7": {
            "type": "double"
        },
        "timestamp": {
            "type": "date"
        },
        "timestampMs": {
            "type": "keyword"
        }
    }
}

arg_parser = ArgumentParser()
arg_parser.add_argument("input", help="Input file (JSON)")
args = arg_parser.parse_args()

# The Takeout export wraps everything in a top-level "locations" array.
with open(args.input) as input_file:
    locations = json.load(input_file)['locations']

# Recreate the index from scratch on every run (elasticsearch-py 7.x syntax).
es.indices.delete(index=index_name, ignore=404)
es.indices.create(index=index_name)
es.indices.put_mapping(index=index_name, body=mapping)

actions = []
for i, location in enumerate(locations):
    # timestampMs is epoch milliseconds; the client serializes datetime to ISO 8601.
    location["timestamp"] = datetime.fromtimestamp(int(location["timestampMs"]) / 1000)
    # E7 fields are degrees multiplied by 10^7.
    location["latitude"] = location["latitudeE7"] / 10000000
    location["longitude"] = location["longitudeE7"] / 10000000
    # A geo_point given as an array expects [lon, lat] order.
    location["point"] = [location["longitude"], location["latitude"]]
    actions.append({
        "_index": index_name,
        "_id": i,
        "_source": location
    })

# Index everything in one bulk call.
helpers.bulk(es, actions)
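
Note that the index calls above use the elasticsearch-py 7.x API. If you run the 8.x client, a hedged sketch of the equivalent index setup (assumption: your setup is otherwise unchanged):

# elasticsearch-py 8.x variant of the index recreation above
es.options(ignore_status=404).indices.delete(index=index_name)
es.indices.create(index=index_name, mappings=mapping)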