My first DC/OS app

By Ilya Dmitrichenko
April 22, 2016

There has been a lot of excitement in the community around the first open-source release of DC/OS.

Here at Weaveworks, the team made sure Weave Scope 0.14 works very well on DC/OS.

Among the variety of DC/OS tutorials, I haven’t yet found one showing a practical example of an app running on DC/OS that looks like a modern, real-world microservices architecture (which is the future, of course). Hence I’ve decided to sort this out!

Please welcome The Pixel Monsterz App! It’s a properly simple single-page 12-factor app that generates basic images looking a bit like the monsters from Space Invaders, just as you have seen on GitHub… It implements the MonsterID algorithm as a service written in Node.js, with a frontend written in Python, and it also uses Redis, just like everything else does.

More on how this app works later, but first I’m going to go through the basics of getting a DC/OS environment bootstrapped locally. I will also install Weave Scope, and use it later to explain how things work in DC/OS and how you can troubleshoot issues when porting your own app to DC/OS. I will go through the Marathon definition of the app and how the microservices connect to each other, using Weave Scope to visualize and explore the infrastructure without switching away from the Weave Scope UI.

DC/OS Installation

For development purposes I’ve decided to install DC/OS locally using Vagrant. There are other options, and the AWS install worked very nicely when I tried it a few days ago, but I just didn’t want to pay for cloud instances this time.
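
If you want to follow along, the rough shape of a Vagrant-based install is sketched below. The repository URL, config file name and steps are assumptions based on how I remember the dcos-vagrant project; treat its README as the authoritative source.

<code># A rough sketch of a Vagrant-based DC/OS install; repository and file names
# are assumptions, follow the dcos-vagrant README for the real steps.
git clone https://github.com/dcos/dcos-vagrant
cd dcos-vagrant
cp VagrantConfig.yaml.example VagrantConfig.yaml   # choose a node layout
vagrant up                                         # provision masters, agents and the boot node
</code>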

Once Vagrant had provisioned the DC/OS cluster, I was able to access the shiny DC/OS dashboard at http://m1.dcos/, where I found Marathon running and all other components reporting good health. Winner! This only took about 15 minutes.

Obtain and Configure CLI Tools

The dcos command is really quite easy to use. You can install it via virtualenv, which is very well documented.
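
For reference, a virtualenv-based install can look roughly like this; the PyPI package name (dcoscli) is quoted from memory, so double-check it against the official documentation:

<code># A minimal sketch of installing the CLI into a virtualenv; the dcoscli
# package name is an assumption, confirm it against the DC/OS CLI docs.
virtualenv ~/dcos-cli
source ~/dcos-cli/bin/activate
pip install dcoscli
dcos --version
</code>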

Once installed, set the URL:

<code>dcos config set core.dcos_url http://m1.dcos
</code>

Now you will need to log in; type the following command and follow the instructions it prints out.

<code>dcos auth login
</code>

Get Weave Scope Packages

We have already documented how to install Weave Scope on DC/OS using the UI, so I thought I’d install it via the CLI this time.

There are two packages to install, one for the UI app and one for the probes.

First, I installed the Scope UI app:

<code>dcos package install weavescope
</code>

I can now install the weavescope-probe package, but I have to pass it the number of Mesos slaves…

Here is what I did with the help of the jq command:

<code>printf '{ "weavescope": { "probe" : { "instances": %d } } }' \
$(dcos node --json | jq '. | length') > weavescope-probe.opts
dcos package install weavescope-probe --options weavescope-probe.opts
</code>

Once both Weave Scope packages were installed, I was able to access the UI at http://m1.dcos/service/weavescope/.

My App

OK, now let’s get to it. As I said, the purpose of this blog post is to show what it takes to deploy a realistic app consisting of a few microservices, and to see how DC/OS and Weave Scope help me with this.

The Pixel Monsterz App is a pretty good match for this: it is fairly simple and uses cool frameworks, Python/Flask and Node.js/Restify, as well as Redis, of course! You might have seen an earlier version of this app if you have read Adrian Mouat’s “Using Docker”. The app was adapted from MonsterID, which generates unique avatars for signed-up users (as seen on GitHub).

The app generates a new monster avatar in each container, caches it and shows it to every visitor who hits that particular container. It also picks another random monster, just for fun. The front-end microservice is written in Python/Flask and the monster producer in Node.js/Restify. The Flask service uses Redis for caching and also counts all sightings of the monsters; monsters are counted per service instance, i.e. by container ID.
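
The actual implementation lives in the app’s repositories, but a minimal sketch of the counting idea might look like the snippet below. This is illustrative, not the app’s real code; it relies on Docker setting the container hostname to the container ID by default.

<code># Illustrative sketch of per-instance sighting counting, not the actual app code.
# Docker sets the hostname to the short container ID by default, so it can
# stand in for the service instance identifier.
import socket
import redis

cache = redis.StrictRedis(host='redis', port=6379, db=0)

def record_sighting():
    container_id = socket.gethostname()
    # INCR creates the key on first use and atomically bumps the counter.
    return cache.incr('sightings:{0}'.format(container_id))
</code>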

Marathon Configuration

Here is the definition in the Docker Compose format, which I hope most of you will find easy to understand:

<code>version: "2"

services:

hello:
image: thepixelmonsterzapp/hello
ports: [ "0.0.0.0:9090:9090" ]

monsterz-den:
image: thepixelmonsterzapp/monsterz-den

redis:
image: redis:3
</code>

I could probably install the Docker Swarm framework on DC/OS, but I thought it shouldn’t be hard to translate this to the Marathon API directly.

First of all, I figured I will need to remap the ports of all the containers, as I’m not using Weave Net and cannot take advantage of unique and routable container IPs right now. So I will be using Mesos DNS and getting Docker to do the port remapping. I am not a big fan of this, but it seems feasible for a small app like this one.

Before I break down the definition of this app that I wrote for Marathon, I’d like to point out that I am going to use the following attributes for all of the containers.

<code>{
  ...
  "container": {
    "type": "DOCKER",
    "docker": {
      ...
      "forcePullImage": true,
      "network": "BRIDGE",
      ...
    }
  }
}
</code>

The key attribute I’d like to highlight is forcePullImage: it will allow me to quickly update the app by restarting it once I’ve pushed new image revisions to Docker Hub. I also have a personal preference for using the bridge network.
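
In practice, updating after pushing a new image can be as simple as restarting the app from the CLI, along these lines (the subcommand name is from memory, so check dcos marathon app --help if it differs):

<code># With forcePullImage set, restarting the app rolls out a freshly pushed image.
dcos marathon app restart /monsterz/apps/hello
</code>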

I will also use Marathon groups to represent the hierarchy of these microservices and make them easier to manage. Below is the outline of this hierarchy.

<code>{
  "id": "/monsterz",
  "groups": [
    {
      "id": "/monsterz/apps",
      "apps": [
        { "id": "/monsterz/apps/hello", ... },
        { "id": "/monsterz/apps/monsterz-den", ... }
      ]
    },
    {
      "id": "/monsterz/data",
      "apps": [{ "id": "/monsterz/data/redis", ... }]
    }
  ]
}
</code>

Usage of groups impacts the DNS names of services; with the above hierarchy, the DNS records will look like this:

  • hello-apps-monsterz.marathon.mesos for /monsterz/apps/hello
  • monsterz-den-apps-monsterz.marathon.mesos for /monsterz/apps/monsterz-den
  • redis-data-monsterz.marathon.mesos for /monsterz/data/redis
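
A quick way to verify these names, assuming you have a shell on one of the cluster nodes (for example via dcos node ssh), is a plain DNS lookup:

<code># Resolve one of the Mesos DNS names from a cluster node; use nslookup or
# `ping -c 1` instead if dig isn't available on the node.
dig +short redis-data-monsterz.marathon.mesos
</code>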

This makes sense, although I’ll still have to remap the ports; I’d rather use Weave Net… I promise to show how much easier it gets with Weave Net next time, as it’s so much nicer for properly big apps with dozens of microservices.

It’s easier to describe this bottom-up, so let’s start with Redis:

<code>{
  "id": "/monsterz/data/redis",
  "cpus": 0.5,
  "mem": 64,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "redis:3",
      "network": "BRIDGE",
      "forcePullImage": false,
      "portMappings": [{
        "containerPort": 6379,
        "hostPort": 18002,
        "protocol": "tcp"
      }]
    }
  }
}
</code>

The main part is the portMappings array, whose values will be reflected in the environment variables of the Flask service. For the rest, I just need to specify sensible values for mem and cpus.

Next up is the Node.js service. Very much like Redis, it is self-contained and doesn’t make any outbound connections, so all I care about is the portMappings object.

<code>{
  "id": "/monsterz/apps/monsterz-den",
  "cpus": 0.1,
  "mem": 32,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "thepixelmonsterzapp/monsterz-den:latest",
      "network": "BRIDGE",
      "forcePullImage": true,
      "portMappings": [{
        "containerPort": 8080,
        "hostPort": 18001,
        "protocol": "tcp"
      }]
    }
  }
}
</code>

Finally, my Flask service: it’s also simple, but has extra attributes, including environment variables and Marathon labels.

<code>{
  "id": "/monsterz/apps/hello",
  "cpus": 0.1,
  "mem": 32,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "thepixelmonsterzapp/hello:latest",
      "network": "BRIDGE",
      "forcePullImage": true,
      "portMappings": [{
        "containerPort": 9090,
        "hostPort": 18000,
        "protocol": "tcp"
      }]
    }
  },
  "labels": {
    "DCOS_SERVICE_NAME": "monsterz",
    "DCOS_SERVICE_SCHEME": "http",
    "DCOS_SERVICE_PORT_INDEX": "0"
  },
  "env": {
    "MONSTERZ_DEN_HOST": "monsterz-den-apps-monsterz.marathon.mesos",
    "MONSTERZ_DEN_PORT": "18001",
    "REDIS_HOST": "redis-data-monsterz.marathon.mesos",
    "REDIS_PORT": "18002",
    "ENV": "DEV"
  }
}
</code>

First, I’ll describe what these labels mean. Essentially, they are what makes the app accessible via the DC/OS Admin Router at the URL shown below.

  • /service/${DCOS_SERVICE_NAME}/ (in my case: http://m1.dcos/service/monsterz)

The DCOS_SERVICE_SCHEME label is self-explanatory, and the DCOS_SERVICE_PORT_INDEX label refers to the first port declared in portMappings (index 0).

The most critical part of the above JSON snippet is the environment variables (env). These variables link the Redis and Monsterz Den services to the frontend Flask service.

Service Lookup Logic Implementation

To make the app work in the configuration above, I needed to handle the MONSTERZ_DEN_HOST, MONSTERZ_DEN_PORT, REDIS_HOST and REDIS_PORT environment variables in the Python code. For that I’ve implemented a very simple dictionary which is initialised from those exact environment variables when they are set; otherwise some well-known defaults are picked.

<code>from os import getenv

services = {
    'redis': {
        'host': getenv('REDIS_HOST', 'redis'),
        'port': getenv('REDIS_PORT', '6379'),
    },
    'monsterz-den': {
        'host': getenv('MONSTERZ_DEN_HOST', 'monsterz-den'),
        'port': getenv('MONSTERZ_DEN_PORT', '8080'),
    },
}
</code>

That dictionary is static, as environment variables won’t change. If I had more services, I could probably generate this, but I only have two services to care about.
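
If I did have more services, one way to generate the same dictionary would be to derive it from a list of service names and default endpoints, roughly like the sketch below; the helper is illustrative and not part of the app.

<code># Illustrative sketch: build the services dictionary from (name, default host,
# default port) tuples instead of writing each entry by hand.
from os import getenv

DEFAULTS = [
    ('redis', 'redis', '6379'),
    ('monsterz-den', 'monsterz-den', '8080'),
]

def make_services(defaults):
    services = {}
    for name, default_host, default_port in defaults:
        prefix = name.replace('-', '_').upper()
        services[name] = {
            'host': getenv(prefix + '_HOST', default_host),
            'port': getenv(prefix + '_PORT', default_port),
        }
    return services

services = make_services(DEFAULTS)
</code>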

For the Redis client I simply look up the redis key and unpack the result as keyword arguments:

<code>cache = redis.StrictRedis(db=0, **services['redis'])
</code>

And to make an HTTP request to the Monsterz Den service, I do a somewhat similar thing:

<code>monster = 'http://{host}:{port}/monster/{name}?size=80'.format(name=name, **services['monsterz-den'])
...
r = requests.get(monster)
</code>

Having this simple service-lookup logic allows me to work on the app with Docker Compose on my Mac, as well as deploy it on any orchestrator with very simple configuration.

Deploy all the Things

The full JSON configuration manifest is on GitHub.

You can deploy it on DC/OS with a single command:

<code>curl --silent --location \
https://raw.github.com/ThePixelMonsterzApp/infra/master/marathon-app.json \
| dcos marathon group add
</code>

Now you can navigate to the Marathon UI to monitor the progress of this deployment. The screenshot below shows all microservices running.

You can access the app at http://m1.dcos/service/monsterz/, and it should look as shown in the next screenshot.

Using Weave Scope

Weave Scope should help a lot in understanding how some of the concepts described above work.

First of all, if you select the frontend node (/monsterz/apps/hello) and refresh the app itself in another window a few times, you will see that it has ephemeral outbound connections to both Monsterz Den instances and a persistent connection to Redis. By selecting the ‘Memory’ tab at the bottom of the window, I can also confirm that the usage is within the limits I have specified.

Next, I’ll open the container’s console and, by refreshing the app once more, I can see the HTTP request being logged there.

By clicking on the shell icon, I can get access to a shell inside the container and perform a ping test to ensure there is connectivity to the Redis and Monsterz Den services. I can also use curl to fetch a monster image and check everything works as expected.
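
For reference, the checks I run from that in-container shell look something like this; the monster endpoint path comes from the Python snippet above, and the exact tools available depend on what the image ships with:

<code># Connectivity checks from inside the hello container; adjust to whatever
# tools the image actually contains.
ping -c 3 redis-data-monsterz.marathon.mesos
ping -c 3 monsterz-den-apps-monsterz.marathon.mesos

# Fetch a monster image directly from the Monsterz Den service; the
# /monster/<name>?size=80 path is the one used by the Flask code above.
curl -o /tmp/test.png \
  "http://monsterz-den-apps-monsterz.marathon.mesos:18001/monster/test?size=80"
</code>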

Conclusion

I hope you will find this information useful while working to get your apps running on DC/OS. You might also like using Weave Scope to explore, troubleshoot and interact with the system while making your way to production with DC/OS, or simply evaluating it against other solutions (Weave Scope works with most popular ones, by the way). If you do make good use of Weave Scope, please let us know!
