Showing posts with label Cloudant. Show all posts

Thursday, August 1, 2019

Use a Delivery Pipeline to rotate credentials

Job in Delivery Pipeline to rotate keys
In my recent posts I touched on updating credentials for solutions deployed on IBM Cloud Functions or using Cloud Foundry on IBM Cloud. Today, I am showing you how to rotate API keys and passwords for a containerized solution on the IBM Cloud Kubernetes Service that makes use of a delivery pipeline (DevOps). I am going to use the app discussed in the tutorial on how to apply end-to-end security to a cloud application.
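One building block of such a pipeline job is pushing the freshly rotated credential into the Kubernetes secret that the app reads. This is not the actual pipeline job from the tutorial, just a minimal sketch using the official Kubernetes Python client; the secret name, namespace, and key are hypothetical placeholders, and obtaining the new API key itself is left out.

 # Minimal sketch: write a rotated credential into a Kubernetes secret.
 # Secret name, namespace, and key are hypothetical placeholders.
 import base64
 from kubernetes import client, config

 def update_secret(new_api_key):
     # In a pipeline job the cluster configuration is typically prepared beforehand
     config.load_kube_config()
     v1 = client.CoreV1Api()
     patch = {"data": {"API_KEY": base64.b64encode(new_api_key.encode()).decode()}}
     v1.patch_namespaced_secret(name="app-credentials", namespace="default", body=patch)

 if __name__ == "__main__":
     update_secret("newly-created-api-key")  # placeholder for the value from the rotation step

After the secret is updated, the pipeline would typically restart the deployment so that the pods pick up the new value.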

Friday, September 14, 2018

Tutorial on how to apply end to end security to a cloud application

Before you head out to the weekend I wanted to point you to a new cloud security tutorial. If you read this at the beginning of your week: What a great start... ;-)

Did you ever wonder how different security services work together to secure a cloud application? In the new tutorial we use
  • IBM Cloud Activity Tracker to log all security-related events. This includes logging in to the account, provisioning or deleting services, working with encryption keys and more.
  • IBM Cloud Key Protect to manage encryption keys. For the tutorial, we generate a root key for envelope encryption of stored files. You could also import your own root key (bring your own key, BYOK). We use the root key to create encrypted buckets in the IBM Cloud Object Storage service.
  • IBM Cloud Object Storage (COS) service to produce expiring links to individual files. The links can be shared with others and expire after the configured amount of time, so that the file cannot be accessed thereafter (a sketch of generating such a link follows this list).
  • IBM Cloud App ID as a wrapper around (enterprise and social) Identity Providers to manage authentication and authorization through a single interface. The App ID service can be directly integrated with Kubernetes Ingress.
  • IBM Cloud Container Registry as a private image registry from which we deploy the application as a container into a Kubernetes cluster (IBM Cloud Kubernetes Service). The container registry includes a Vulnerability Advisor that scans for and assesses container vulnerabilities and then recommends fixes.
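As a small illustration of the expiring links mentioned in the Object Storage item above, here is a hedged sketch using the ibm-cos-sdk (ibm_boto3) with HMAC credentials; the endpoint, bucket, and object names are placeholders and not values from the tutorial.

 # Sketch: create a link to an object in IBM Cloud Object Storage that
 # expires after 10 minutes. Endpoint, credentials, bucket, and key are placeholders.
 import ibm_boto3
 from ibm_botocore.client import Config

 cos = ibm_boto3.client(
     "s3",
     aws_access_key_id="<HMAC access key id>",
     aws_secret_access_key="<HMAC secret access key>",
     config=Config(signature_version="s3v4"),
     endpoint_url="https://s3.eu-de.cloud-object-storage.appdomain.cloud",
 )
 url = cos.generate_presigned_url(
     "get_object",
     Params={"Bucket": "my-encrypted-bucket", "Key": "report.pdf"},
     ExpiresIn=600,  # seconds until the link stops working
 )
 print(url)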
Best of all, the code for the security tutorial is shared on GitHub in this repository. If you are in a hurry, it even allows you to deploy the full Node.js-in-Docker application and its services at the press of a button via a toolchain.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Thursday, May 24, 2018

How to pack serverless Python actions

Serverless access to Db2 and GitHub
For my tutorial on automated data retrieval and analytics, I use IBM Cloud Functions to automatically fetch GitHub traffic statistics once a day. It is implemented as a serverless Python action. Because some Python packages are needed, the question was how to pack and create the action. In this blog, I share some of my experiences.
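To give an idea of what such an action looks like, here is a minimal sketch of a __main__.py entry point; the repository name and the use of the requests library are illustrative assumptions rather than the exact tutorial code. A common packaging approach is to zip this file together with a virtualenv folder holding the dependencies and upload the archive as the action.

 # __main__.py -- entry point of a packaged IBM Cloud Functions Python action.
 # Repository name and the 'requests' dependency are illustrative assumptions.
 import requests

 def main(args):
     # 'args' carries the parameters passed at invocation time
     repo = args.get("repo", "owner/repository")
     resp = requests.get("https://api.github.com/repos/{}".format(repo))
     data = resp.json()
     # whatever is returned here becomes the JSON result of the action
     return {"repo": repo, "stars": data.get("stargazers_count")}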

Monday, February 19, 2018

New tutorial: Db2-driven Slackbot

Ever wanted to build a Slackbot, a chatbot integrated into Slack, on your own? I am going to show you how easy it is to integrate Slack or Facebook Messenger with the IBM Watson Conversation service. As a bonus, the bot is going to access a Db2 database to store and retrieve data. The solution is based on IBM Cloud Functions and is entirely serverless.
Slackbot Architecture
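To sketch how a serverless action can talk to Db2, here is a hedged Python example using the ibm_db driver; the EVENTS table, its column, and the parameter names are made up for illustration and are not taken from the tutorial.

 # Sketch of a Python action that stores a chat message in Db2 via ibm_db.
 # Table, column, and parameter names are illustrative placeholders.
 import ibm_db

 def main(args):
     conn_str = (
         "DATABASE={db};HOSTNAME={host};PORT=50000;PROTOCOL=TCPIP;"
         "UID={user};PWD={pwd};"
     ).format(db=args["db"], host=args["host"], user=args["user"], pwd=args["password"])
     conn = ibm_db.connect(conn_str, "", "")
     stmt = ibm_db.prepare(conn, "INSERT INTO EVENTS(MESSAGE) VALUES(?)")
     ibm_db.execute(stmt, (args.get("text", ""),))
     ibm_db.close(conn)
     return {"stored": True}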

Friday, September 29, 2017

App Security Scanner: My Cloud Database App is Secure

Cloud Application Security
Over the past years I have written a couple of cloud-based apps. Most of them have a database backend. I know - to a good degree - how to secure the database system. I have some background in secure software engineering. And I trust cloud providers like IBM to secure the platform and runtime environment. Today, I wanted to get feedback on the overall web app security and tested the Application Security on Cloud service offered on the IBM Cloud platform. Here is what I did and the results.

Monday, March 20, 2017

IBM Bluemix in Germany, includes dashDB and Cloudant

IBM Bluemix in Germany, in German
Today, I wanted to share some exciting news with you. Most of you know that I am German. Thus, it is terrific to have IBM Bluemix available from Frankfurt, Germany, today. As can be seen on the screenshot on the right, the new Bluemix region is labeled "eu-de".

Having Bluemix Public in Germany is a big step for the IBM Cloud and customers alike. Being located next to DE-CIX means low network latency for German and European customers. Together with Bluemix Public in London ("eu-gb"), it is possible to deploy applications with high-availability requirements redundantly within Europe. The database-as-a-service offerings dashDB ("DB2") and Cloudant are already available in the Bluemix catalog. More database and analytics services are to follow. You can check out the list of initial services here in the Bluemix Catalog for Germany.

That's all for today. I am back to MY German Bluemix...

Friday, February 24, 2017

Securing Workloads on IBM Cloud - Some Resources

Security Guides for IBM Cloud
Recently, I provided you with an overview of security and compliance resources for IBM dashDB and Cloudant. Today, I want to take a broader view and point you to some good introductory material on security for cloud-based workloads. It starts with an overview of different cloud deployment models and their components. Then it digs into each of those categories and takes a look at how to secure those components and the data.


Friday, February 3, 2017

Security and Compliance for IBM dashDB and Cloudant

Database Security and Compliance
I often get asked about the security features of IBM dashDB and Cloudant. Both are database services ("DBaaS") offered on IBM Bluemix. Once the security topic is dealt with, compliance-related questions are next. A good chunk of questions can be answered by going over the provided product documentation. Here are the links to get you started on database security and compliance.

Monday, August 22, 2016

Notes on Notebooks, Data, DB2, and Bluemix

Weather Graph in Jupyter Notebook
Some time ago I shared with you how I used open data like weather data and CeBIT statistics to have some fun with Jupyter Notebooks. Notebooks are an old, but now - in the age of Cloud Computing - trendy way of collaborative data exploration (see this article for some background reading). Notebooks consist of so-called cells, places to put code, instructions, text and more. Cells can hold Markdown-formatted text or code written in Python, Scala and other languages. It is possible to fetch data from DB2, dashDB and other database systems and process it in the notebook, creating stunning graphics. And with extensions such as RISE (reveal.js IPython Slideshow Extension) those notebooks replace Powerpoint & Co as a source for great data-driven presentations. How to use notebooks and DB2 is what I plan to present at the IDUG EMEA 2016 Conference in Brussels later this year.
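As a taste of what such a notebook cell can look like, here is a hedged sketch that pulls rows from a DB2/dashDB table into a pandas DataFrame and plots them; the connection values and the WEATHER table with its columns are illustrative assumptions.

 # Notebook cell sketch: fetch data from DB2/dashDB and plot it inline.
 # Connection values and the WEATHER table/columns are illustrative assumptions.
 import ibm_db_dbi
 import pandas as pd

 conn = ibm_db_dbi.connect(
     "DATABASE=BLUDB;HOSTNAME=<host>;PORT=50000;PROTOCOL=TCPIP;"
     "UID=<user>;PWD=<password>;", "", "")
 df = pd.read_sql("SELECT OBS_DATE, TEMP_MIN FROM WEATHER ORDER BY OBS_DATE", conn)
 df.plot(x="OBS_DATE", y="TEMP_MIN", title="Daily minimum temperature")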

If you can't wait until then I recommend taking a look at these recent blog posts on how to get started with Notebooks and data on Bluemix:
In case you have trouble programming your notebooks, head over to Stack Overflow and search for "ipython-notebook" or "jupyter-notebook".

That's it for today with my notes on notebooks.

Monday, April 11, 2016

Data Protection, Privacy, Security and the Cloud

Protecting your bits

(This is the first post in a planned series on data protection, security, and privacy related to DB2/dashDB in the cloud and IBM Bluemix)

As a data/database guy from Germany, security, data protection, and privacy have been high on my list of interests for many, many years. As a banking customer I would hate it if someone unauthorized accessed my data. I also don't like going through the hassle of replacing credit cards, changing passwords, taking up a new name (user name only :), or more, because a system my data is or was on has been hacked. With more and more data being processed "in the cloud", it is great to know how much effort has been put into designing secure cloud computing platforms, into operating them according to the highest security standards, and into following international and local data protection standards and laws for legal compliance.

Friday, April 1, 2016

New Bluemix Data Sink Service Tackles Data Overload

IBM today announced a new experimental service for its Bluemix cloud platform that provides Data Sink capabilities to its users, helping companies to tackle data overload scenarios, enhance data archiving throughput, and solve data retention issues. Building on the global network of SoftLayer data centers, the new data sink as a service will feature triple redundancy for high performance and increased fault tolerance, and hence is tentatively named DSaaSTR (Data Sink-as-a-Service with Triple Redundancy).

Initially the experimental Bluemix service is free and allows piping up to 1 TB (one terabyte) of data a month to the data sink. Customers already on direct network links will be able to utilize the full network bandwidth. This provides the opportunity to test the DSaaSTR offering for the following scenarios:
  • The abundance of sensors and their generated data, whether in Internet of Things (IoT) or Industry 4.0 scenarios, leaves companies struggling with data storage. Utilizing the new service they can leverage the DSaaSTR in the cloud to get rid of local data.
  • The more data and data storage options, the more intruders. By piping data to the Bluemix DSaaSTR it will become unavailable for attackers.
  • Local data archives require active data management, enforcement of retention policies, and rigorous disposal. DSaaSTR offers easy choices for data retention and disposal.
  • Many enterprises have learned that even Hadoop Clusters need actively managed storage. DSaaSTR can be configured to be part of a local or cloud-based Hadoop system (hybrid cloud), thus eliminating storage costs and simplifying the overall administration tasks for the cluster.
The new service is made available to select customers worldwide as part of the experimental Bluemix service catalog. If you are interested in signing up please send me an email or get in touch with your regular Bluemix seller or support contact.

Wednesday, March 16, 2016

CeBIT: Goldsmith in the Hybrid Cloud - How to Create Value from Enterprise Data

Gold Nuggets - Data as Gold
Data, data, data. There is a lot of data, data already stored or archived, and data about to be produced, generated, measured, or even missing data. But there is not always value in the accessible data, even though data is considered the new gold. Similar to real gold and a goldsmith creating jewels, data first needs to be worked on, refined, transformed and made interesting to consumers, turned into information or insight.

Thursday, February 18, 2016

Building a Solution? The Cloud Architecture Center has Blueprints

Cloud Architecture Center
Remember the days when a simple text client and a small database server were the core enterprise solution? These days data flows from various endpoints to data lakes or data reservoirs, and data streams are analyzed in real time to trade stocks, prevent fraud, or react to sensor data. How are other companies building their solutions and what are best practices? What products or services can be used? It is great that the new Cloud Architecture Center offers blueprints.


Right now the IBM developerWorks Cloud Architecture Center features an architecture gallery where you can filter the available blueprints by overall area like data & analytics, Internet of Things (IoT), Mobile or Web Application. Other filter criteria are industry and capability, i.e., you could look for a sample solution for the insurance industry or a use case featuring a Hybrid Cloud scenario.
Partial view: Architecture for Cloud Solution

For the selected architecture and solution you are presented with the overall blueprint (as partially shown in the screenshot), are offered information about the flow and the included components, i.e., the deployed services and products, and get an overview of the functional and non-functional requirements. Depending on the solution there are links to sample applications, code repositories on GitHub, and more reading material. See the Personality Insights architecture as a good example.

The Architecture Center offers great material for enterprise architects and solution designers and the linked samples and demos are also a good start for developers.

(Update 2016-02-21): There is a new and good overview article with focus on Big Data in the cloud and possible architecture.

Monday, August 24, 2015

Keeping track of "my" DB2

Some years back when talking about DB2, there was only the "I am on the host" or "We are mostly OLTP" distinction. My standard phrase has been "I am a distributed guy". Well, today, when only talking about DB2 for Linux, UNIX, and Windows ("DB2 LUW"), some more description is needed to clearly state what, where, and how the product is used. Here is a small overview of my world of DB2 LUW.


If you have not done so, I would recommend taking a look at the IBM Data Server Manager as a fairly new tool to monitor, administer, and tune DB2. The experience of using the web-based GUI can be directly applied to the administration interfaces of the related DB2 services in the cloud, sqldb and dashDB.

Monday, July 20, 2015

Bluemix: Simple cron-like service for my Python code

Schedule background tasks in Bluemix
This morning I had some time to fiddle around with Bluemix and Python. I wanted to test out creating a cron-like service, i.e., something that runs in the background and kicks off tasks at given intervals or at a given time. For Python there is a package called "schedule" that is easy to set up and use. So how do you create a background service in Bluemix?

The trick is in the file manifest.yml, and the Cloud Foundry documentation has all the needed details (Bluemix is built on this open-source platform). The attribute "no-route" is set to true, indicating that this is not a Web application and we don't need a subdomain name. In addition, the attribute "command" is set to invoke the Python interpreter with my script as parameter. Basically, this starts my background task:

 applications:
 - name: hltimer
   memory: 256M
   instances: 1
   no-route: true
   command: python mytimer.py
   path: .


The cron-like service script is pretty simple. It uses the "schedule" package to set up recurring jobs. I tested it with the Twilio API to send me an SMS at given times. You can also use it to scrape webpages at given intervals, kick off feed aggregators, read out sensors and update databases like Cloudant or DB2, and more. See this GitHub repository for the full source.

 import schedule
 import time

 def job():
     # put the task to execute here
     pass

 def anotherJob():
     # another task can be defined here
     pass

 schedule.every(10).minutes.do(job)
 schedule.every().day.at("10:30").do(anotherJob)

 while True:
     schedule.run_pending()
     time.sleep(1)

Tuesday, April 21, 2015

My 10 Minute DB2 App on Bluemix

I needed some fun at the end of the workday today. Why not enhance my own skills, do some "coding", and try out Node-RED on Bluemix again (I performed a "phoney" test in October already)? This time, I wanted to see whether a database like DB2 or dashDB is supported as a backend.

Node-RED Starter: This application demonstrates how to run the Node-RED open-source project within IBM Bluemix.
To get started I logged into my free trial account on http://bluemix.net. I selected the "Node-RED Starter" from the catalog and created the app skeleton. After the app was staged (the resources allocated, files copied, and the skeleton app started), I could begin wiring up my application logic. My first look at the wiring board went to the list of supported storage nodes. What was new since my last test is support for dashDB and "sqldb" (DB2) as input and output nodes. That is, it is possible to store data in those two database systems (output) and bring data into the app by querying a database (input).

To use DB2 in Node-RED, you need to create a sqldb service in Bluemix first. This meant going back to the catalog, skipping to the Data Management section, selecting the sqldb service, and adding it to my application (see picture below). Thereafter, my application needed to be restaged (restarted), so that it could pick up the properties of the sqldb service.

Monday, November 17, 2014

A quick look at dashDB and a happy SQuirreL

dashDB slogan on its website
This morning I took some time to take a look at dashDB, a new IBM DWaaS (Data Warehouse as a Service) offering. When you go to the dashDB website, you are offered two choices: Use the dashDB service available on IBM Bluemix or use a Cloudant account to add a warehouse to your JSON database. Let me give you a brief overview of what you can do with dashDB and how I connected a local (open source) SQuirreL SQL client to my new dashDB database.


Cloudant Warehousing (dashDB)
dashDB is a cloud-based analytics database ("analytics in a dash") with roots in Netezza and DB2 with BLU Acceleration. Data is stored in table (rows and columns) format. It is ready to connect all kinds of analytic tools, local or cloud-based, and is already set up for geo-spatial data analysis (instructions on how to use the ESRI ArcGIS Desktop are provided). The best part is that your regular SQL database/analytic tools continue to work, see below for details.

dashDB: schema discovery
I started my journey by logging into my existing Cloudant account. There, on the dashboard menu is a new item "Warehousing". When clicking on the "New Warehouse" button, you can select the Cloudant databases that you want to import into the warehouse. Because multiple databases can be associated with a Cloudant account or a Bluemix Cloudant service, this step lets you pick the data of choice. After the source data is chosen, the dashDB database is created and so-called schema discovery turns the JSON documents into rows of tables. Thereafter, the data is ready to have analytics applied. That is the time to launch the dashDB control center, another so-called "dashboard".

The welcome screen shows some of the analytic options, e.g., the database is ready to be used with either Cognos, SPSS, InfoSphere DataStage, R scripts, or all of them and more:
Analytics for dashDB: Cognos, SPSS, DataStage, R
SQuirrel SQL client - dashDB connected
Because some time ago I already tested and blogged about a predecessor of dashDB (see here: how to set it up and how to use R), I was more interested in trying out a JDBC-based client with my new cloud-based data warehouse. Included as part of the dashboard are several sections that help you with the application setup. So it was easy for me to obtain the JDBC URL and configure it and the listed userid/password in my local SQuirreL SQL client (it will work in IBM Data Studio and the Optim tool, too). As you can see from the screenshot, the database connection from my laptop to the cloud-based dashDB succeeded. Ready for some SQL.

My lessons learned from testing database queries on the converted data (JSON to relational) will be part of another blog entry. Stay tuned...

Monday, October 13, 2014

Node-RED: Simple "phoney" JSON entries in Cloudant

Node-RED Starter on IBM Bluemix
Want to automatically store data about who called you on which phone number? Last Friday I tried exactly that on IBM Bluemix and was amazed how simple it is, with no real programming involved. All I needed was the Node-RED Starter boilerplate (icon is on the right) on Bluemix and a new API of my mobile service provider.

The Node-RED boilerplate automatically creates a Node.js runtime environment on Bluemix and installs the Node-RED tool into it. In addition, a Cloudant JSON database is created. Once everything was deployed, I opened the Node-RED tool in a Web browser. It offers a basic set of different input and output methods, processing nodes, and the ability to connect them in a flow graph. One of the input nodes is a listener for http requests, which helps react to Web service requests. I placed such an http input node on the work sheet and labeled it "phone" (see screenshot).
Node-RED tool on Bluemix

How did I obtain information about callers and the numbers they called? Well, I tapped my friends at a security agency. I used a new free service offering of one of my phone service providers. It is called "sipgate.io" and allows configuring a URL/Web address that is accessed whenever someone calls one of the account's phone numbers (you could have multiple phone numbers). An http POST request is sent to the configured URL, and the caller's phone number and the called number are included as payload. In my Node-RED application the "phone" node answers this request.

What I needed now was the data processing flow of the Web services request. On Friday I already tweeted the entire flow:



On the left we see the "phone" node as http input. Connected to it is the "ok" node, which sends an http response back, telling the phone company's Web service that we received the information. The other connected node is a "json" processor which translates the payload (who called which number) into a meaningful JSON object. That object is then passed on to the "calls" node, a Cloudant output node. All we needed was to select the Cloudant service on Bluemix and to configure the database name.
Cloudant Output Node, Node-RED on Bluemix
The magic itself was pressing the "Deploy" button on top of my Node-RED worksheet. It created the Node.js code for my Bluemix app. I was a little bit nervous about testing the app because I didn't code anything, just clicked. For my test I took the phone in my home office and dialed my mobile phone number. The result? A new, nice and shiny, "phoney" JSON document in Cloudant:
"Phoney" record in Cloudant
Almost too easy. Great stuff, but unfortunately no code to share... :) (You can find the flow on GitHub)

Monday, August 25, 2014

Setting up and using a DB2 in-memory database on IBM Bluemix

[Update 2014-11-04: The Analytics Warehouse service on Bluemix is now called dashDB.]
Last Friday I was on the way back from some customer visits. While traveling on a German high-speed train I used the Wi-Fi service, connected to IBM Bluemix, and created a DB2 in-memory database. Let me show you how I set it up, what you can do with it, and how I am connecting to the cloud-based database from my laptop.


Unbound DB2 service on Bluemix
The first thing to know is that on Bluemix the DB2 in-memory database service is called IBM Analytics Warehouse. To create a database, you select "Add service" and leave it unbound if you want, i.e., it is not directly associated with any Bluemix application. That is ok because at this time we are only interested in the database. Once the service is added and the database itself created, you can launch the administration console.

The console supports several administration and development tasks, as shown in the picture. It includes loading data, developing analytic scripts in R, executing queries, and linking the data with Microsoft Excel for processing in a spreadsheet, and it has a section for connecting external tools or applications to the database.
Administration/development task in DB2 BLU console on Bluemix
One of the offered tasks is very interesting, and I tweeted about it on Friday, too:



You can set up replication from a Cloudant JSON database to DB2, so that the data stream is directly fed in for in-memory analyses. I didn't test it so far, but plan to do so with one of my other Bluemix projects.

A task that I used is to (up)load data. For this I took some historic weather data (planning ahead for a vacation location), let the load wizard extract the metadata to create a suitable table, and ran some queries.

Uploading data to DB2 on Bluemix

Specify new DB2 table and column names

For executing (simple) selects there is a "Run Query" dialog. It allows you to choose a table and columns and then generates a basic query skeleton. I looked into whether a specific German island had warm nights, i.e., a daily minimum temperature of over 20 degrees Celsius. Only 14 days out of several decades and thousands of data points qualified.
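For reference, the query behind that check might look like the following; this is a hedged sketch run through the ibm_db driver, and the table and column names are assumptions based on the description rather than the actual schema.

 # Sketch: count the "warm nights" (daily minimum temperature above 20 degrees Celsius).
 # Table and column names are assumptions based on the description above.
 import ibm_db

 conn = ibm_db.connect(
     "DATABASE=BLUDB;HOSTNAME=<bluemix-host>;PORT=50000;PROTOCOL=TCPIP;"
     "UID=<user>;PWD=<password>;", "", "")
 stmt = ibm_db.exec_immediate(
     conn, "SELECT COUNT(*) AS WARM_NIGHTS FROM WEATHER WHERE TEMP_MIN > 20")
 row = ibm_db.fetch_assoc(stmt)
 print(row["WARM_NIGHTS"])
 ibm_db.close(conn)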







Last but not least, I connected my local DB2 installation and tools to the Bluemix/Softlayer-based instance. The "CATALOG TCPIP NODE" command is needed to make the remote server and communication port known. Then the database is added. If you already have a database with the same name cataloged on the local system, it will give an error message as shown below. You can work around it by specifying an alias. So instead of calling the database BLUDB, I used BLUDB2. The final step was to connect to DB2 with BLU Acceleration in the cloud. And surprise, it uses a fixpack version that officially is not available yet for download...

DB:  => catalog tcpip node bluemix remote 50.97.xx.xxx server 50000
DB20000I  The CATALOG TCPIP NODE command completed successfully.
DB21056W  Directory changes may not be effective until the directory cache is
refreshed.
DB:  => catalog db bludb at node bluemix
SQL1005N  The database alias "bludb" already exists in either the local
database directory or system database directory.
DB:  => catalog db bludb as bludb2 at node bluemix
DB20000I  The CATALOG DATABASE command completed successfully.
DB21056W  Directory changes may not be effective until the directory cache is
refreshed.
DB:  => connect to bludb2 user blu01xxx
Enter current password for blu01xxx:

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.5.4
 SQL authorization ID   = BLU01xxx
 Local database alias   = BLUDB2

I plan to develop a simple application using the DB2 in-memory database (BLU Acceleration / Analytics Warehouse) and then write about it. Until then, read more about IBM Bluemix in my other related blog entries.

Friday, July 25, 2014

The Hunt for the Chocolate Thief (Part 2) - Putting IBM Bluemix, Cloudant, and a Raspberry Pi to good use

I am still on the hunt for the mean chocolate thief, kind of. In the first part I covered the side of the Raspberry Pi and uploading data to Cloudant. I showed how to set up an infrared motion sensor and a webcam with the RPi, capture a snapshot, and secure the image and related metadata in a Cloudant database on the IBM Bluemix Platform-as-a-Service (PaaS) offering. In this part I am going to create a small reporting website with Python, hosted on IBM Bluemix.

Similar to an earlier weather project, I use Python as the scripting language. On Bluemix, which is based on Cloud Foundry, this means "bring your own buildpack". I already described the necessary steps, which are to tell Bluemix how to create the runtime environment and install the needed Python libraries. So how do I access the incident data, i.e., the webcam snapshots taken by the Raspberry Pi when someone is in front of the infrared motion sensor? Let's take a look at the script:

 import os  
 from flask import Flask,redirect  
 import urllib  
 import datetime  
 import json  
 import couchdb  
   
   
 app = Flask(__name__)  
   
 # couchDB/Cloudant-related global variables  
 couchInfo=''  
 couchServer=''  
 couch=''  
   
 #get service information if on Bluemix  
 if 'VCAP_SERVICES' in os.environ:  
   couchInfo = json.loads(os.environ['VCAP_SERVICES'])['cloudantNoSQLDB'][0]  
   couchServer = couchInfo["credentials"]["url"]  
   couch = couchdb.Server(couchServer)  
 #we are local  
 else:  
   with open("cloudant.json") as confFile:  
    couchInfo=json.load(confFile)['cloudantNoSQLDB'][0]  
    couchServer = couchInfo["credentials"]["url"]  
    couch = couchdb.Server(couchServer)  
   
 # access the database which was created separately  
 db = couch['officecam']  
   
 @app.route('/')  
 def index():  
   # build up result page  
   page='<title>Incidents</title>'  
   page +='<h1>Security Incidents</h1>'  
   
    # Gather information from the database about recorded incidents
   page += '<h3>Requests so far</h3>'  
   # We use an already created view  
   for row in db.view('incidents/incidents'):  
      page += 'Time: <a href="/incident/'+str(row.key["id"])+'">'+str(row.key["ts"])+'</a><br/>'  
   
   # finish the page structure and return it  
   return page  
   
 @app.route('/incident/<id>')  
 def incident(id):  
   # build up result page  
   page='<title>Incident Detail</title>'  
   page +='<h1>Security Incident Details</h1>'  
   doc=db.get(id)  
   # Gather information from database about the incident  
   page += '<br/>Incident at date/time:'+str(doc["timestamp"])  
   page += '<br/>reported by "'+doc["creater"]+'" at location "'+doc["location"]+'"'  
    page += '<br/>Photo taken:<br/><img src="/image/'+id+'" />'  
   # finish the page structure and return it  
   return page  
   
 @app.route('/image/<id>')  
 def image(id):  
   #redirecting the request to Cloudant for now, but should be hidden in the future  
   return redirect(couchServer+'/officecam/'+id+'/cam.jpg')    
     
   
 port = os.getenv('VCAP_APP_PORT', '5000')  
 if __name__ == "__main__":  
      app.run(host='0.0.0.0', port=int(port))  


Overview of Security Incidents
The setup phase includes reading in access data for the Cloudant database server. Either that information is taken from a Bluemix environment variable or provided in a file "cloudant.json" (similar to what I did on the RPi). The main part of the script defines three routes, i.e., how to react to certain URL requests. The index page (index()) returns an overview of all recorded incidents, an incident detail page (incident(id)) fetches the data for a single event and embeds the stored webcam image into the generated page, and the last route (image(id)) redirects the request to Cloudant.

Looking at how the index page is generated, you will notice that a predefined Cloudant view (secondary index) named "incidents/incidents" is evaluated. It is a simple map function that emits a composite key made up of the timestamp and document ID, and the view is sorted by that key.

Incident Detail: Hadoop involved?
function(doc) {
    if (doc.type == "oc")
       emit({"ts" : doc.timestamp, "id" : doc._id}, 1);
}
Then I access the timestamp information and generate the list as shown in the screenshot above.

The incident detail page has the document ID as parameter. This makes it simple to retrieve the entire document and print the details. The webcam image is embedded. So who got my chocolate? Take a look. It looks like someone who got a free copy of "Hadoop for Dummies" at the IDUG North America conference.

Maybe another incident will shed light on this mystery. Hmm, looks like someone associated with the "Freundeskreis zur Förderung des Zeppelin Museums e.V." in Friedrichshafen. I showed the pictures to my wife and she was pretty sure who took some chocolate. I should pay more attention when grabbing another piece of my chocolate and watch more closely how much I am eating/enjoying.
Zeppelin Brief seen at robbery

Have a nice weekend (and remember to sign up for a free Bluemix account)!