Multi-Domain Docker Containers

Use case

We have several server applications in the same development environment, and each application is bundled in its own Docker container, e.g. "Container A" and "Container B".

With Docker, those applications share the same IP address. One way to differentiate them and access a specific application is to expose a different port for each one.

/galleries/docker-multidomain/ip.thumbnail.png

Containers exposing the same IP address and different ports

But that solution is a bit confusing: does port 8080 mean we are accessing "application A"?

It would be simpler and easier to remember something like:

/galleries/docker-multidomain/domain.thumbnail.png

Accessing applications by domain name

Getting that extra semantic value is much simpler than I initially thought, as you will see below.

How to Configure Multi-Domain Reverse Proxy

I said it is easy because we barely have to do anything; another container will do it for us. Specifically, we are going to use nginx-proxy, which automatically generates the required NGINX configuration.

So, we will have 2 applications + 1 proxy, that is 3 containers.

Note

You can download the full example at https://github.com/carlosvin/docker-reverse-proxy-multi-domain

/galleries/docker-multidomain/proxy.thumbnail.png

3 containers, 2 applications + 1 proxy

Example Project Structure

  • docker-compose.yaml (Main configuration file describing the architecture in the previous picture)
  • a (Application A directory)
    • Dockerfile (Container A configuration file)
  • b (Application B directory)
    • Dockerfile (Container B configuration file)

See the project.

Architecture Configuration (docker-compose)

The relationships between containers are the most interesting part of this example.

docker-reverse-proxy-multi-domain/docker-compose.yaml (Source)

a:
  build: a
  environment:
    VIRTUAL_HOST: a.domain.com
  restart: always

b:
  build: b
  environment:
    VIRTUAL_HOST:  b.domain.com
  restart: always

nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro

  restart: always
  privileged: true
  • Lines 4 and 10: we configure the domain name for each application.
  • From line 13 onwards is the proxy configuration (the copy/paste part); we can check the result as shown below.
  • Lines 2 and 8: we tell docker-compose to build the Docker image from the specified directory. For example, line 2 says that docker-compose has to build a Docker image using the ./a/Dockerfile file.
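Once everything is running (see "Everything ready!" below), we can sanity-check what nginx-proxy generated for us. This step is optional and the container name below is hypothetical; it depends on how docker-compose names the proxy container on your machine:

docker ps                                                          # find out the actual name of the proxy container
docker exec <nginx-proxy-container> cat /etc/nginx/conf.d/default.conf

The generated configuration should contain one server entry per VIRTUAL_HOST we declared.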

Application Image Configuration

docker-reverse-proxy-multi-domain/a/Dockerfile (Source)

FROM httpd:2.4
RUN echo "<html><body><h1>A</h1>App A works!</body></html>" > /usr/local/apache2/htdocs/index.html

Line 1: We base the image on one that ships an Apache server.

Line 2: It creates the default page, which prints "App A works!".

The configuration for application B is pretty much the same:

docker-reverse-proxy-multi-domain/b/Dockerfile (Source)

FROM httpd:2.4
RUN echo "<html><body><h1>B</h1>App B works!</body></html>" > /usr/local/apache2/htdocs/index.html

Adding domain names to your development environment configuration

On Linux we just have to map the local address to the domain names we have chosen, in this example a.domain.com and b.domain.com.

#/etc/hosts
127.0.0.1             localhost.localdomain localhost
::1                 localhost6.localdomain6 localhost6
127.0.0.1   a.domain.com
127.0.0.1   b.domain.com

I just added lines 4 and 5.
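If you prefer doing it from the command line, a quick one-liner that appends both entries (it needs root privileges) could be:

printf "127.0.0.1   a.domain.com\n127.0.0.1   b.domain.com\n" | sudo tee -a /etc/hosts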

Everything ready!

Now we just have to test the example:

docker-compose build
docker-compose up

The 3 containers are running now.

Now we can open our favourite web browser and go to a.domain.com; it will show "App A works!". If we go to b.domain.com, we will see "App B works!".

/galleries/docker-multidomain/a.screenshot.thumbnail.png

a.domain.com

/galleries/docker-multidomain/b.screenshot.thumbnail.png

b.domain.com
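We can also check both applications from the terminal. Assuming the /etc/hosts entries above are in place, something like this should return each default page:

curl http://a.domain.com    # <html><body><h1>A</h1>App A works!</body></html>
curl http://b.domain.com    # <html><body><h1>B</h1>App B works!</body></html>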

Note

In most Linux distros you will need root privileges (sudo) to run Docker commands.

Rust web frameworks comparison

I'm doing some experiments with Rust because it is a language that promises to be as fast as C/C++, but safer regarding memory management. Essentially, it doesn't allow the developer to do "bad things" with memory, such as forgetting to release memory that is no longer going to be used, or releasing memory the code doesn't own. In such scenarios, Rust won't compile.

Just for learning, I've started a small project that offers a REST API, so I've been looking for frameworks to ease/speed up the development. I've found a Rust web frameworks comparison: https://github.com/flosse/rust-web-framework-comparison

Convert file formats: Windows to Unix

If you are developing on a Windows environment for a Unix target environment, you have most likely hit this issue: source files end up in Windows (CRLF) format on your Unix environment.

There is a quite simple way to convert all your files from Windows to Unix format:

find . -type f -print0 | xargs -0 dos2unix

I got it, of course, from http://stackoverflow.com/questions/11929461/how-can-i-run-dos2unix-on-an-entire-directory
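If you only want to touch specific file types, a slightly narrower variant can be used; the extensions below are just an example:

# Convert only C/C++ sources and headers, leaving everything else untouched
find . -type f \( -name "*.c" -o -name "*.cpp" -o -name "*.h" \) -print0 | xargs -0 dos2unix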

REST URLs

The first time I designed a REST API I made several mistakes, of course. Below I explain common mistakes and what I've learned about REST URLs, with examples.

REST Basics

  • Use URLs to access resources.
  • Use verbs to modify resources.
  • The verbs are provided by the HTTP protocol.
  • The verbs have a direct equivalence with CRUD [1].
  • To access an existing resource we need an identifier.

REST Verbs

POST: Create new resources.
GET: Read already existing resources.
PUT: Update already existing resources.
DELETE: Delete already existing resources.

It is clearer in the following table:

REST Verb   CRUD Action   Resource must exist
POST        Create        No
GET         Read          Yes
PUT         Update        Yes
DELETE      Delete        Yes

Accessing Resources

A resource is what we want to get. For example, a car.

But that information alone is not enough to get a car: you can't go to your car dealer and ask for just any car, you have to specify which one you want:

Good morning. I'd like to have a Fiat Bravo 1.9 Emotion 120CV.

This way the seller knows which one it is.

"Fiat Bravo 1.9 Emotion 120CV" is the identifier.

Transferring the example to REST APIs:

GET   http://cardealer.com/api/cars/fiat-bravo-19-emotion-120cv

Now our API can supply the car info.

This is a very simple example, but in general, when we access a specific resource we have to use something that identifies it; a common and recommendable practice is to use a UUID.

GET  http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f

But our API, like a shop, doesn't have to be so strict. We can ask for cars by several features:

Good morning, I want a Fiat Bravo.

Then the dealer will kindly show you all the Fiat Bravos he has available. Let's see how to say that with the API.

GET  http://cardealer.com/api/cars/?brand=fiat&model=bravo

The API will return all cars of brand Fiat and model Bravo.

Brand and model are so-called query parameters.

As you might have already noticed, to get resource information we have always used the GET verb.
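Both read requests above can be reproduced from the command line with curl (the host cardealer.com is, of course, fictional):

# Read one specific car by its identifier
curl http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f
# Read all cars matching the given query parameters
curl "http://cardealer.com/api/cars/?brand=fiat&model=bravo"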

Update resources

The API should also support updating resources. Like reading resources, to update a resource we have to specify which resource we want to update, so we again need an identifier.

Before, we wanted to get information (read), so we used the GET verb. Now the only difference is the verb.

We want to update, so we use the equivalent HTTP verb: PUT.

PUT   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f

Actually, something else is missing: we have to say what part of the car we want to change. For example, let's imagine we want to change the engine power and set it to 100CV.

We have to send the new engine power to the URL http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f over HTTP using the PUT verb.

The HTTP protocol allows sending data within the PUT message; we just have to choose a format for it.

We can use JSON, XML or whatever, we only have to ensure that the format we send is the one expected on the server side.

Note

Designing a REST API requires selecting a data format for the messages.

JSON example:

{ "enginePower": 100 }
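Assuming the (fictional) server expects JSON, the whole update request could be sent with curl like this:

curl -X PUT http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f \
     -H "Content-Type: application/json" \
     -d '{ "enginePower": 100 }'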

Delete Resources

Let's imagine that now we are the car dealer and we don't want to sell the Fiat Bravo 1.9 Emotion 120CV anymore (the cce05bee-386b-11e5-a151-feff819cdc9f). We keep the URL that identifies the resource, but we change the verb: we don't want to read (GET), we don't want to update (PUT), we want to delete (DELETE).

DELETE   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f

We don't have to supply any additional info, only the verb (DELETE) and the resource identifier.
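With curl, the delete request is just the verb and the URL:

curl -X DELETE http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f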

Create Resources

And the last verb is for creating (POST). In this case we don't have to identify the resource, because it doesn't exist yet.

POST   http://cardealer.com/api/cars/

But we have to send the data to create the resource.

Following the example, let's create a new car, so we include the necessary data within the POST HTTP message. It is similar to what we did in the Update resources section, but here we send all the required data, not only the engine power.

JSON example:

{
  "brand": "Fiat",
  "model": "Bravo",
  "year": 2010,
  "doors": 5,
  "enginePower": 120,
  "version": "Emotion",
  "clima": true,
  "ac": false,
  "fuel": "Diesel"
}

We can let the system assign a new identifier, or simply send it within the message:

{
  "identifier": "cce05bee-386b-11e5-a151-feff819cdc9f",
  "brand": "Fiat",
  "model": "Bravo",
  "year": 2010,
  "doors": 5,
  "enginePower": 120,
  "version": "Emotion",
  "clima": true,
  "ac": false,
  "fuel": "Diesel"
}
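Again assuming a JSON body on the (fictional) server, the creation request could look like this with curl:

curl -X POST http://cardealer.com/api/cars/ \
     -H "Content-Type: application/json" \
     -d '{ "brand": "Fiat", "model": "Bravo", "year": 2010, "doors": 5, "enginePower": 120, "version": "Emotion", "clima": true, "ac": false, "fuel": "Diesel" }'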

Collections

All the actions explained so far were actually applied to a cars collection.

But what happens if a resource has a nested collection?

Continuing with the cars example, a car can use a set of engine oils, so the API must allow updating, deleting or creating elements in that set.

Note

For this example we will assume that the oil identifier is the type attribute.

Add an element to a collection


When we add a car to the cars collection, what we do is create a new car, so this is the Create Resources case.

To add a new engine oil to the car cce05bee-386b-11e5-a151-feff819cdc9f, which already exists:

POST   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/

{
  "type": "5W30",
  "otherInfo": "This is the best oil for this car"
}

If we want to add another one:

POST   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/

{
  "type": "10W30",
  "otherInfo": "This is very good for cold weather"
}

Update a collection item

If we want to update the info of oil 5W30 of car cce05bee-386b-11e5-a151-feff819cdc9f:

PUT   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/5W30/

{
  "type": "5W30",
  "otherInfo": "This is no longer the best oil for this car"
}

Delete a collection item

To delete the oil 10W30 from car cce05bee-386b-11e5-a151-feff819cdc9f:

DELETE   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/10W30

Read a collection item

To get the info of oil 10W30 of the car cce05bee-386b-11e5-a151-feff819cdc9f:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/10W30

List collection items

As we saw in Read a collection item, we can get the info of a single collection element, but we can also get multiple collection elements and apply typical collection actions, like sorting and paging.

Getting all supported oils for the car cce05bee-386b-11e5-a151-feff819cdc9f is as simple as:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/

We can also get sorted items:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?sort_by=type&order=asc

We can ask the API to return the first 10 oils for car cce05bee-386b-11e5-a151-feff819cdc9f:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?number_of_elements=10

The API can also support pagination:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?page=3&number_of_elements=2

The request above tells the API to return page 3 of all the oils of car cce05bee-386b-11e5-a151-feff819cdc9f, showing 2 oils per page. If we want to go to the next page:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?page=4&number_of_elements=2

All those features are supported by query parameters.
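For instance, fetching a sorted page of oils from the command line (the query parameter names are the ones assumed in the examples above):

curl "http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?sort_by=type&order=asc&page=3&number_of_elements=2"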

Common mistake

The first time I tried to design a REST API, I designed an API, but not a REST one.

My main mistake was the URL design: I added my own verbs instead of using the HTTP verbs.

For example:

POST    http://example.com/api/cars/ford-focus/delete-oil/5W30

The right way:

DELETE  http://example.com/api/cars/ford-focus/oils/5W30

Video Tutorials

These 2 videos helped me to understand REST URLs; I encourage you to watch them in full:

[1] Create, Read, Update, Delete

C++ Dependency Management: Biicode

I'm interested in building, dependency management, packaging and deployment of software projects. For Java, Scala, Python, and so on, this is quite easy since there are tools like Maven, Gradle, pip, Sbt, etc. But regarding C++, the best options I've found are Maven with the Nar plugin or Gradle with the cpp plugin (in incubation).

I found out about Biicode almost 2 years ago, but I never found the time to test it, until today.

How does Biicode work?

Firstly we have to install Biicode.

I've made a tiny example project using the logging system from the Poco library.

I've executed this command to create the project, called bii_log:

bii new carlosvin/bii_log --hello=cpp

I've created the project under my Biicode username, in case I'd like to publish it later.

The previous command generates the file and directory structure, although we are going to focus only on:

blocks/carlosvin/bii_log/main.cpp
blocks/carlosvin/bii_log/biicode.conf

In biicode.conf we configure our dependencies, in this example the Poco library.

# Biicode configuration file

[requirements]
    fenix/poco(develop): 0

[parent]
    carlosvin/bii_log: 0

[includes]
    Poco/*.h: fenix/poco/Foundation/include

In the [includes] section we are overriding the path to the header files. If we didn't override it, we'd have to do something like this:

#include "fenix/poco/Foundation/include/Logger.h"

Thanks to this line, include declarations are going to be clearer, as follows:

#include "Poco/Logger.h"

Easy, now we can start using Poco in our project, e.g:

#include "Poco/FileChannel.h"
#include "Poco/FormattingChannel.h"
#include "Poco/PatternFormatter.h"
#include "Poco/Logger.h"
#include "Poco/AutoPtr.h"

using Poco::FileChannel;
using Poco::FormattingChannel;
using Poco::PatternFormatter;
using Poco::Logger;
using Poco::AutoPtr;

int main(int argc, char** argv) {
        // File channel: writes to log/sample.log, rotates every 100 KB and archives old files with a timestamp suffix
        AutoPtr<FileChannel> pChannel(new FileChannel);
        pChannel->setProperty("path", "log/sample.log");
        pChannel->setProperty("rotation", "100 K");
        pChannel->setProperty("archive", "timestamp");

        // Formatter: prefixes each message with date, time and the logger name
        AutoPtr<PatternFormatter> pPF(new PatternFormatter);
        pPF->setProperty("pattern", "%Y-%m-%d %H:%M:%S %s: %t");
        AutoPtr<FormattingChannel> pFC(new FormattingChannel(pPF, pChannel));
        Logger::root().setChannel(pFC);

        // Write plenty of log entries to exercise the rotation
        Logger & logger = Logger::get("TestChannel");
        for (int i = 0; i < 10000; i++) {
                poco_information(logger, "This is an info");
                poco_warning(logger, "This is a warning");
        }
        return 0;
}

To compile the project we only have to execute the following command:

bii cpp:build

To publish the project and allow everyone to use it as we have used Poco:

bii publish

Besides the ease of use, I really like the integration with Eclipse CDT: after running "bii cpp:build" all the files were properly indexed.

I've also read an article about the good integration with CLion: When CLion met biicode.

Software Maintenance

A few days ago at work, I had to fill in a document where I had to select the type of software maintenance I was going to apply.

The thing is, I had only two choices, which seemed very weird to me, because during my degree I studied 3 or 4 kinds of software maintenance.

Today I found my Software Engineering class notes; these are the types of software maintenance, sorted in descending order by percentage of time spent:

Perfective: Activities to improve or add new functionalities required by the user.
Adaptive: Activities to adapt the system to technological environment changes (hardware or software).
Corrective: Fix defects in hardware or software detected by users running the production system.
Preventive: Activities to ease future system maintenance.

Build C++ project with Gradle

Introduction

I am more and more worried about building, dependency management and distribution of my projects. I'd like to find a tool that unifies those processes independently of the language. I know several tools that almost fit what I'm looking for, like SCons, Autotools, Ant, Maven and lately Gradle.

I've made several projects with Gradle, but I was always focused on Java and Android projects. For Java projects it has become my Maven replacement, because it is faster, easier and less verbose. Regarding Android projects, I suffered the early adoption of Android Studio + Gradle, although currently I think they are more mature and work fine.

First of all, I have to say that building C/C++/Objective-C projects with Gradle is in the incubation phase, although we can already perform advanced tasks like:

  • Generating several artifacts (libraries and executables) within the same project.
  • Dependency management between artifacts (without versions).
  • Different "flavors" of the same software, e.g. we can generate a "Community" release and another one with more features enabled called "Enterprise".
  • Multi-platform binary generation.

As I said, this plugin still has limitations, although they are working on them: Gradle C++ roadmap. If they achieve it, I'll leave Autotools (I'm going to regret saying that).

Read more…

Java Embedded Databases: Performance Comparison

Embedded databases

These are databases that have no server; they are embedded in the application itself and are usually stored in local files. That, together with the fact that they usually offer a mode of operation that keeps data in memory, means they can achieve very high performance.

On the other hand, this high degree of coupling to the application means they perform worse when shared between several applications, due to access collisions.

Another advantage is that we don't have to maintain and manage a database server.

I'm going to make a performance comparison between 3 ACID (transactional) embedded databases; NoSQL databases are not included in this comparison, as they play in another performance league.

Read more…

Java serialization ways: Performance Comparison

Recently I've had to serialize/deserialize some data in Java binary format, whereas lately I usually use JSON or XML formats.

I remembered that to serialize Java objects they must implement the Serializable interface, but I had also read on the Internet about another way: implementing the Externalizable interface. So, which interface should I implement? As with everything in life, it depends on what you want.

Read more…