Choosing a Modern C++ stack

I'm starting a new project in C++, but I've run into a pair of questions before getting started:

  1. Which build system should I use?
  2. Which unit testing framework?

Choosing a Build System (Meson)

I have used Make, Maven, Scons, Gradle and Autotools before.

But I have some reasons to try to find something else.

Autotools
It is not easy to configure and maintain. There are several configuration files and several configuration steps.
Gradle
C++ support is still incubating and it is not very fast. You can check a similar example project at Build C++ project with Gradle.
Make
I don't love the syntax. Files tend to get messy as the project grows.
Scons
It is just slow.
Maven
It is slow and you might end up "Javatizing" your C++ project structure.

Note

I've listed just the things I don't like; those projects have other great features.

Now I'm considering Meson or CMake.

CMake has a big advantage over Meson: it is mature and widely used in many projects, which means there are many examples and it will fulfill your C++ project building needs.

Meson is a young project compared with CMake, but it is growing quite fast and it has been adopted by other big projects like GNOME, which has an initiative to port from Autotools to Meson.

I've chosen Meson because:

  • The syntax is really clear to me; when I read a meson.build file I can quickly understand what is happening during the build process.
  • It is fast. Although it is written in Python, it generates a Ninja build project. The first time you configure the project you have to run Meson, but for building or testing you are actually running Ninja.
$ meson build . # first time you configure the project
$ cd build
$ ninja         # each time you build it
$ ninja test    # each time you run tests

I've found two interesting comparisons of available C++ build systems; they might be a little biased because those comparisons come from Meson and Scons.

Unit Testing Framework

I have used some xUnit-based libraries like UnitTest++, CppUTest and Google Test, which pairs perfectly with Google Mock. If you want a safe bet that fulfills almost all of your testing needs, I highly recommend Google Test.
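
To illustrate the xUnit style these libraries share, here is a minimal Google Test sketch (the test itself is a made-up placeholder; it assumes gtest is installed and linked with -lgtest -lgtest_main):

#include <gtest/gtest.h>
#include <string>

// xUnit style: each TEST macro defines an independent test case
TEST(StringTest, SizeIsComputedCorrectly)
{
    std::string s{"hello"};
    EXPECT_EQ(s.size(), 5u);   // non-fatal assertion
    ASSERT_TRUE(!s.empty());   // fatal assertion: stops this test on failure
}

// No main() is needed when linking against gtest_main, e.g.:
//   g++ -std=c++11 string_test.cpp -lgtest -lgtest_main -pthread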

But some time ago I found a testing framework with some interesting features, Catch:

  • It is just a header file with no external dependencies, so it is very easy to get started (wget + include the downloaded file).
  • You can use the normal unit test style or BDD style.

If you want to know more about Catch, I recommend giving it a try; it is a matter of 2 minutes to get a simple example up and running. You can also read some interesting articles like Why do we need yet another C++ test framework? or Testing C++ With A New Catch.
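
For reference, a minimal Catch sketch covering both styles (it assumes catch.hpp has been downloaded next to the source file; the add function is just a placeholder):

#define CATCH_CONFIG_MAIN   // ask Catch to generate main() in this translation unit
#include "catch.hpp"

int add(int a, int b) { return a + b; }

// normal unit test style
TEST_CASE("addition works")
{
    REQUIRE(add(2, 2) == 4);
}

// BDD style
SCENARIO("adding two numbers")
{
    GIVEN("two positive integers") {
        const int a = 2, b = 3;
        WHEN("they are added") {
            THEN("the result is their sum") {
                REQUIRE(add(a, b) == 5);
            }
        }
    }
}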

doctest: A Catch alternative

There is another testing framework named doctest, with the same benefits as Catch, but it promises to be faster and lighter (benchmark results) than Catch.

doctest is modeled after Catch and some parts of the code have been taken directly, but there are differences.

It hasn't been easy to decide; both are really similar. You can check here the differences between the project using doctest and the project using Catch.

I've finally chosen doctest because it promises to be faster: benchmark results.

Note

I've created the project using both frameworks; you can find them in the corresponding branches: doctest branch or catch branch.

Hint

You can see the differences between the projects at: https://github.com/carlosvin/uuid-cpp/pull/1

Example

I've created an example to illustrate this article: https://github.com/carlosvin/uuid-cpp.

It is a basic implementation of a pseudo-random UUID generator based on mt19937, which is not cryptographically secure.
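
The real implementation lives in the repository; the following is only a rough sketch of the underlying idea (it uses the 64-bit std::mt19937_64 engine for convenience and does not set the RFC 4122 version and variant bits):

#include <cstdint>
#include <iomanip>
#include <iostream>
#include <random>
#include <sstream>

int main()
{
    std::random_device rd;                          // non-deterministic seed
    std::mt19937_64 engine{rd()};                   // pseudo-random, NOT cryptographically secure
    std::uniform_int_distribution<std::uint64_t> dist;

    const std::uint64_t most = dist(engine);        // high 64 bits
    const std::uint64_t least = dist(engine);       // low 64 bits

    // format as the canonical 8-4-4-4-12 hexadecimal layout (36 characters)
    std::ostringstream out;
    out << std::hex << std::setfill('0')
        << std::setw(8)  << (most >> 32) << '-'
        << std::setw(4)  << ((most >> 16) & 0xFFFF) << '-'
        << std::setw(4)  << (most & 0xFFFF) << '-'
        << std::setw(4)  << (least >> 48) << '-'
        << std::setw(12) << (least & 0xFFFFFFFFFFFFULL);

    std::cout << out.str() << std::endl;
    return 0;
}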

Project output artifacts

  • Shared library: libuuid.
  • Header file for developers who want to use the shared library: include/Uuid.h.
  • Executable uuidgen (UUID generator).
  • Test executable (not installed). It tests the shared library.

For example, if you execute ninja install on Linux, you will get something like:

/usr/local/lib/libuuid.so
/usr/local/include/Uuid.h
/usr/local/bin/uuidgen
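
Once those artifacts are installed, a hypothetical consumer program could use the shared library like this (the -luuid flag and the default search paths are assumptions based on the listing above):

// consumer.cpp — compile with something like:
//   g++ -std=c++11 consumer.cpp -luuid -o consumer
#include <Uuid.h>     // installed to /usr/local/include
#include <iostream>

int main()
{
    ids::Uuid uuid;                        // class exported by libuuid
    std::cout << uuid.to_str() << std::endl;
    return 0;
}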

Project structure (Fork project)

  • meson.build

    Root project file configuration. It defines project properties and subdirectories.

    project(
        'cpp-meson-example', # project name
        'cpp', # C++ project (use e.g. 'c' for a C project)
        version : '1.0.0',
        license : 'MIT',
        default_options : ['cpp_std=c++11']) # compile as C++11
    
    # it will be referenced from the subdirectory projects
    inc = include_directories('include')
    
    # meson will try to find a meson.build file inside the following directories
    subdir('include')
    subdir('src')
    subdir('test')
    
  • include
    • meson.build

      Subdirectory build configuration file.

      # Select header files to be installed
      install_headers('Uuid.h')
      
    • Uuid.h

      Header file; it is the library interface definition which will be included by projects using the library.

      namespace ids {
      
      class Uuid {
          private:
          // ...
      
  • src
    • meson.build (src)

      It declares 2 output artifacts: libuuid and uuidgen.

      libuuid = shared_library(
          'uuid', # library name
          'Uuid.cpp', # source files to be compiled
          include_directories : inc, # include directories previously declared in the root meson.build
          install : true) # libuuid will be part of the project installation
      
      uuidgen = executable(
          'uuidgen', # executable name
          'main.cpp', # source files to compile
          include_directories : inc, # include directories previously declared in the root meson.build
          link_with : libuuid, # link the executable with the previously declared shared library libuuid
          install : true) # the uuidgen executable will be part of the project installation
      
    • main.cpp

      Entry point for the main executable uuidgen.

      #include "Uuid.h"
      #include <iostream>
      
      int main()
      {
          ids::Uuid uuid;
          std::cout << uuid.to_str() << std::endl;
          return 0;
      }
      
    • Uuid.cpp

      Implementation of the class declared in the header file.

      #include "Uuid.h"
      
      Uuid::Uuid()
      { // ...
      
  • test
    • meson.build (test)

      File to configure the tests' build process.

      testexe = executable(
          'testexe', # test executable name
          'uuid_test.cpp', # test source files to be compiled
          include_directories : inc,  # include directories declared in the root meson.build
          link_with : libuuid) # link the test executable with the previously declared shared library libuuid
      
      # test execution
      test('Uuid test', testexe)
      
      # we can specify another test execution, passing arguments or environment variables
      test('Uuid test with args and env', testexe, args : ['arg1', 'arg2'], env : ['FOO=bar'])
      
    • doctest.h

      The doctest library in a single header file. You could try to automate the library installation as part of your build process, but I haven't yet figured out a way to do it with Meson. For now I've installed it manually:

      cd test
      wget https://raw.githubusercontent.com/onqtam/doctest/master/doctest/doctest.h
      
    • uuid_test.cpp

      Tests implementation.

       // This tells doctest to provide a main() - only do this in one cpp file
      #define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
      
      #include "doctest.h"
      #include "Uuid.h"
      #include <string>
      
      constexpr int MAX_ITERS = 100;
      
      TEST_CASE( "Uuid" ) {
          for (int i=0; i<MAX_ITERS; i++) {
              ids::Uuid uuid;
              std::string uuid_str {uuid.to_str()};
      
              MESSAGE(uuid_str);
              CHECK(uuid.most > 0);
              CHECK(uuid.least > 0);
              CHECK(uuid_str.size() == 36);
          }
      }
      
      // BDD style
      
      SCENARIO( "UUID creation" ) {
      
          GIVEN( "A random UUID " ) {
              ids::Uuid uuid;
              std::string uuid_str {uuid.to_str()};
      
              CHECK(uuid_str.size() == 36);
      
              WHEN( "get the most and least" ) {
                  THEN( "should be more than 0" ) {
                      CHECK( uuid.most > 0);
                      CHECK( uuid.least > 0);
                  }
              }
          }
      }
      

Filesystem in C++17

Getting started with Experimental Filesystem Features in C++17 (g++)

We just have to "tell" the compiler that we are writing C++17 code (-std=c++1z) and that it has to link the filesystem library (-lstdc++fs).

g++ -std=c++1z main.cpp -lstdc++fs && ./a.out

Let's see a simple example with the std::filesystem::path class.

#include <experimental/filesystem>
#include <iostream>

namespace fs = std::experimental::filesystem;
using namespace std;

int main()
{
    fs::path aPath {"./path/to/file.txt"};

    cout << "Parent path: " << aPath.parent_path() << endl;
    cout << "Filename: " << aPath.filename() << endl;
    cout << "Extension: " << aPath.extension() << endl;

    return 0;
}

Compile and run: Basic C++17 example

Run output is:

$ g++ -std=c++1z main.cpp -lstdc++fs && ./a.out

Parent path: "./path/to"
Filename: "file.txt"
Extension: ".txt"

C++17 Filesystem Features

In this section, we are going to explain some std::filesystem features with examples, which will help us highlight the differences between C++11 and C++17, so we can get a better idea of what this new library supplies and how it might make a developer's work easier.

std::filesystem::path

Above we have seen a tiny use case for std::filesystem::path. It is a quite powerful and convenient feature that supplies a multi-platform abstraction for paths to files and directories, using the correct directory separator for the platform we are building our application for (\ for Windows-based systems and / for Unix-based systems).

Directory separator

When we want our application to use the correct directory separator in C++11, we could use a conditional macro declaration:

#include <iostream>

using namespace std;

#ifdef _WIN32
const string SEP = "\\";
#else
const string SEP = "/";
#endif

int main()
{
    cout << "Separator in my system " << SEP << endl;
    return 0;
}

Compile and run: C++11 separator example

With C++17 it is just simpler:

#include <experimental/filesystem>
#include <iostream>

namespace fs = std::experimental::filesystem;
using namespace std;

int main()
{
    cout << "Separator in my system " << fs::path::preferred_separator << endl;
    return 0;
}

Compile and run: C++17 separator example

Directory Separator Operator

std::filesystem::path implements the / operator, which allows us to easily concatenate paths to files and directories.

When we want to concatenate paths in C++11, we have to add extra logic to avoid adding duplicate separators and to select the correct separator for the target platform:

#include <iostream>

using namespace std;

#ifdef _WIN32
const string SEP = "\\";
#else
const string SEP = "/";
#endif

int main()
{
    string root {"/"};
    string dir {"var/www/"};
    string index {"index.html"};

    string pathToIndex{};
    pathToIndex.append(root).append(SEP).append(dir).append(SEP).append(index);

    cout << pathToIndex << endl;
    return 0;
}

Compile and run: Concatenate paths in C++11.

Checking the program output, we notice it is not fully correct; we should have checked whether the path parts already contain a separator so we don't append another one. That logic is already implemented in std::filesystem::path, so the C++17 version can look like:

#include <experimental/filesystem>
#include <iostream>

namespace fs = std::experimental::filesystem;
using namespace std;

int main()
{
    fs::path root {"/"};
    fs::path dir {"var/www/"};
    fs::path index {"index.html"};

    fs::path pathToIndex = root / dir / index;

    cout << pathToIndex << endl;
    return 0;
}

Compile and run: Concatenate paths in C++17. The code is cleaner and simply correct; there are no duplicated separators.

Create/Remove Directories

std::filesystem comes with some utilities to create and remove files and directories, but first let's check out a way to do so in C++11.

#include <iostream>
#include <cstdio>
#include <sys/stat.h>

using namespace std;

int main()
{
    auto opts = S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH;
    mkdir("sandbox", opts);
    mkdir("sandbox/a", opts);
    mkdir("sandbox/a/b", opts);
    mkdir("sandbox/c", opts);
    mkdir("sandbox/c/d", opts);

    system("ls -la sandbox/*");

    remove("sandbox/c/d");
    remove("sandbox/a/b");
    remove("sandbox/c");
    remove("sandbox/a");
    remove("sandbox");

    system("ls -la");

    return 0;
}

Compile and run: Create and remove directories C++11. We have to create/remove them one by one. We could rewrite this code snippet with fewer lines (using a loop, as sketched below), but we still have to pay attention to the creation/deletion order: we cannot remove a parent directory before we have removed all its children.
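
A rough sketch of that loop-based variant (still C++11 and still order-sensitive; the directory list mirrors the one used above):

#include <sys/stat.h>   // mkdir
#include <cstdio>       // remove
#include <string>
#include <vector>

int main()
{
    const std::vector<std::string> dirs {
        "sandbox", "sandbox/a", "sandbox/a/b", "sandbox/c", "sandbox/c/d"};
    const auto opts = S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH;

    // create parents before their children
    for (const auto & d : dirs)
        mkdir(d.c_str(), opts);

    // remove children before their parents, i.e. iterate in reverse order
    for (auto it = dirs.rbegin(); it != dirs.rend(); ++it)
        remove(it->c_str());

    return 0;
}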

Since C++17 we can create and remove nested directories with just one call.

#include <experimental/filesystem>
#include <iostream>

namespace fs = std::experimental::filesystem;
using namespace std;

int main()
{
    fs::create_directories("sandbox/a/b");
    fs::create_directories("sandbox/c/d");
    system("ls -la sandbox/*");

    cout << "Were directories removed? " << fs::remove_all("sandbox") << endl;
    system("ls -la");

    return 0;
}

Compile and run: Create and remove directories C++17.

Full example: Recursive Directory Iterator

This example consists of iterating recursively through directories, filtering files by extension.

To keep the C++11 example simple, I haven't added the filtering logic, but it is in the C++17 example:

recursive-directory/filesystem.11.cpp (Source)

#include <dirent.h>
#include <cstring>
#include <iostream>
#include <fstream> // std::ofstream
#include <vector>
#include <memory>
#include <system_error>
#include <sys/stat.h>

using namespace std;

const string UP_DIR = "..";
const string CURRENT_DIR = ".";
const string SEP = "/";


string path(initializer_list<string> parts)
{
    string pathTmp {};
    string separator = "";
    for (auto & part: parts)
    {
        pathTmp.append(separator).append(part);
        separator = SEP;
    }
    return pathTmp;
}

vector<string> getDirectoryFiles(const string& dir, const vector<string> & extensions)
{
    vector<string> files;
    shared_ptr<DIR> directory_ptr(opendir(dir.c_str()), [](DIR* dir){ dir && closedir(dir); });
    if (!directory_ptr)
    {
        throw system_error(error_code(errno, system_category()), "Error opening : " + dir);
    }

    struct dirent *dirent_ptr;
    while ((dirent_ptr = readdir(directory_ptr.get())) != nullptr)
    {
        const string fileName {dirent_ptr->d_name};
        if (dirent_ptr->d_type == DT_DIR)
        {
            if (CURRENT_DIR != fileName && UP_DIR != fileName)
            {
                auto subFiles = getDirectoryFiles(path({dir, fileName}), extensions);
                files.insert(end(files), begin(subFiles), end(subFiles));
            }
        }
        else if (dirent_ptr->d_type == DT_REG)
        {
            // here we should also check whether the filename has one of the extensions in the extensions vector
            files.push_back(path({dir, fileName}));
        }
    }
    return files;
}

int main ()
{
    auto opt = S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH;
    mkdir("sandbox", opt);
    mkdir("sandbox/a", opt);
    mkdir("sandbox/a/b", opt);

    vector<string> e_files = {
        "./sandbox/a/b/file1.rst",
        "./sandbox/a/b/file1.txt",
        "./sandbox/a/file2.RST",
        "./sandbox/file3.md",
        "./sandbox/will_be.ignored"
    };

    // create files
    for (auto &f: e_files)
    {
        ofstream of(f, ofstream::out);
        of << "test";
    }

    cout << "filtered files: " << endl;
    for (auto &f: getDirectoryFiles(".", {".rst", ".RST", ".md"}))
    {
        cout << "\t" << f << endl;
    }

    return 0;
}

Compile and run C++11 example.

The following example also filters files by extension.

recursive-directory/filesystem.17.cpp (Source)

#include <experimental/filesystem>
#include <iostream>
#include <vector>
#include <fstream>
#include <algorithm>    // std::find

namespace fs = std::experimental::filesystem;
using namespace std;

vector<string> getDirectoryFiles(const string & dir, const vector<string> & extensions)
{
    vector<string> files;
    for (auto & p: fs::recursive_directory_iterator(dir))
    {
        if (fs::is_regular_file(p))
        {
            if (extensions.empty() || find(extensions.begin(), extensions.end(), p.path().extension().string()) != extensions.end())
            {
                files.push_back(p.path().string());
            }
        }
    }
    return files;
}

int main()
{
    fs::create_directories("sandbox/a/b");
    vector<string> e_files = {
        "./sandbox/a/b/file1.rst",
        "./sandbox/a/b/file1.txt",
        "./sandbox/a/file2.RST",
        "./sandbox/file3.md",
        "./sandbox/will_be.ignored"
    };

    // create files
    for (auto &f: e_files)
    {
        ofstream(f) << "test";
    }

    cout << "filtered files: " << endl;
    for (auto &f: getDirectoryFiles(".", {".rst", ".RST", ".md"}))
    {
        cout << "\t" << f << endl;
    }

    return 0;
}

Compile and run C++17 example.

Multi-Domain Docker Containers

Use case

We have several server applications in the same development environment; each application is bundled in a Docker container, e.g. "Container A" and "Container B".

With Docker those applications have the same IP address. One way to differentiate and access a specific application is to expose different ports.

/galleries/docker-multidomain/ip.thumbnail.png

Containers exposing the same IP address and different ports

But that solution is a little bit confusing: does 8080 mean we are accessing "application A"?

It would be simpler and easier to remember something like:

/galleries/docker-multidomain/domain.thumbnail.png

Accessing applications by domain name

Getting that extra semantic value is much simpler than I initially thought, as you will see below.

How to Configure Multi-Domain Reverse Proxy

I said it is easy because we have to do almost nothing; another container will do it for us. Specifically, we are going to use nginx-proxy, which will automatically generate the required NGINX configuration.

So, we will have 2 applications + 1 proxy, that is 3 containers.

Note

You can download the full example at https://github.com/carlosvin/docker-reverse-proxy-multi-domain

/galleries/docker-multidomain/proxy.thumbnail.png

3 containers, 2 applications + 1 proxy

Example Project Structure

  • docker-compose.yaml (Main configuration file describing the architecture in the previous picture)
  • a (Application A directory)
    • Dockerfile (Container A configuration file)
  • b (Application B directory)
    • Dockerfile (Container B configuration file)

See the project.

Architecture Configuration (docker-compose)

The relationships between containers are the most interesting part of this example.

docker-reverse-proxy-multi-domain/docker-compose.yaml (Source)

a:
  build: a
  environment:
    VIRTUAL_HOST: a.domain.com
  restart: always

b:
  build: b
  environment:
    VIRTUAL_HOST:  b.domain.com
  restart: always

nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro

  restart: always
  privileged: true

  • The VIRTUAL_HOST environment variables configure the domain name for each application.
  • The nginx-proxy service contains the proxy configuration (the copy/paste part).
  • The build entries tell docker-compose to build the Docker images from the specified directories. For example, build: a says that docker-compose has to build a Docker image using the ./a/Dockerfile file.

Application Image Configuration

docker-reverse-proxy-multi-domain/a/Dockerfile (Source)

FROM httpd:2.4
RUN echo "<html><body><h1>A</h1>App A works!</body></html>" > /usr/local/apache2/htdocs/index.html

Line 1: We import an image with an Apache server.

Line 2: It creates a page that shows "App A works!" and serves it as the default page.

The configuration for application B is pretty much the same:

docker-reverse-proxy-multi-domain/b/Dockerfile (Source)

FROM httpd:2.4
RUN echo "<html><body><h1>B</h1>App B works!</body></html>" > /usr/local/apache2/htdocs/index.html

Adding domain names to your development environment configuration

On Linux we just have to map the local address to the domain names we have chosen, in this example a.domain.com and b.domain.com.

#/etc/hosts
127.0.0.1             localhost.localdomain localhost
::1                 localhost6.localdomain6 localhost6
127.0.0.1   a.domain.com
127.0.0.1   b.domain.com

I just added the last two lines (the a.domain.com and b.domain.com entries).

Everything ready!

Now we just have to test the example:

docker-compose build
docker-compose up

The 3 containers are running now.

So we can open our favourite web browser and go to a.domain.com, which will show "App A works!". If we go to b.domain.com, we will see "App B works!".

/galleries/docker-multidomain/a.screenshot.thumbnail.png

a.domain.com

/galleries/docker-multidomain/b.screenshot.thumbnail.png

b.domain.com

Note

In most Linux distros you will need privileges to run Docker commands (sudo).

Rust web frameworks comparison

I'm doing some experiments with Rust because it is a language that promises to be as fast as C/C++, but safer with regard to memory management. Essentially, it doesn't allow the developer to do "bad things" with memory, like forgetting to release memory that is not going to be used anymore, or releasing memory the developer does not own. In such scenarios, the Rust code won't compile.

Just for learning, I've started a small project that offers a REST API, so I've been looking for frameworks to ease/speed up the development. I've found a Rust web frameworks comparison: https://github.com/flosse/rust-web-framework-comparison

Convert file formats: Windows to Unix

If you are developing in a Windows environment for a Unix target environment, most likely you have had this issue: you install source files in Windows format in your Unix environment.

There is a quite simple way to convert all your files from Windows to Unix format:

find . -type f -print0 | xargs -0 dos2unix

I got it, of course, from http://stackoverflow.com/questions/11929461/how-can-i-run-dos2unix-on-an-entire-directory

REST URLs

The first time I designed a REST API I made several mistakes, of course. Below I'm going to explain common mistakes and what I've learned about REST URLs, with examples.

REST Basics

  • URLs are used to get resources.
  • Verbs are used to modify resources.
  • The verbs are provided by the HTTP protocol.
  • The verbs have a direct equivalence with CRUD [1].
  • To access an existing resource we need an identifier.

REST Verbs

POST
Create new resources.
GET
Read already existing resources.
PUT
Update already existing resources.
DELETE
Delete already existing resources.

It is clearer in the following table:

REST Verb    CRUD Action    Resource must exist
POST         Create         No
GET          Read           Yes
PUT          Update         Yes
DELETE       Delete         Yes

Accessing Resources

A resource is what we want to get. For example, a car.

To be able to get a car, that information is not enough; you can't go to your car dealer and ask for just any car, you have to specify which one you want:

Good morning. I'd like to have a Fiat Bravo 1.9 Emotion 120CV.

In this manner the seller knows which one it is.

"Fiat Bravo 1.9 Emotion 120CV" is the identifier.

Transferring the example to REST APIs:

GET   http://cardealer.com/api/cars/fiat-bravo-19-emotion-120cv

Now our API can supply the car info.

This is a very simple example, but when we access a specific resource we have to use something to identify it; a common and recommended practice is to use a UUID.

GET  http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f

But our API, like a shop, doesn't have to be so strict. We can ask for cars with certain features:

Good morning, I want a Fiat Bravo.

Then the dealer will kindly show you all the Fiat Bravos he has available. Let's see how the API expresses that.

GET  http://cardealer.com/api/cars/?brand=fiat&model=bravo

The API will return all cars of brand Fiat and model Bravo.

Brand and model are so-called query parameters.

As you might have already noticed, to get resource information we have always used the GET verb.

Update resources

The API should also support updating resources. Like reading resources, to update a resource we have to specify which resource we want to update, so we again need an identifier.

Before, we wanted to get information (read) and we used the GET verb. Now the only difference is the verb.

We want to update, so we use the equivalent HTTP verb: PUT.

PUT   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f

Actually something else is missing: we have to say what part of the car we want to change. For example, let's imagine we want to change the engine power and set it to 100CV.

We have to send the new engine power to the following URL, http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f, over HTTP using the PUT verb.

The HTTP protocol allows sending data within the PUT message; we just have to choose a format for sending it.

We can use JSON, XML or whatever; we only have to ensure that the format we send is the one expected on the server side.

Note

Designing a REST API requires selecting a format for the data being sent.

JSON example:

{ "enginePower": 100 }

Delete Resources

Let's imagine that now we are the car dealer and we don't want to sell the Fiat Bravo 1.9 Emotion 120CV anymore (the cce05bee-386b-11e5-a151-feff819cdc9f). We'll keep the URL that identifies the resource, but we change the verb: we don't want to read (GET), we don't want to update (PUT), we want to delete (DELETE).

DELETE   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f

We don't have to supply any additional info, only the verb (DELETE) and the resource identifier.

Create Resources

And the last verb is the one to create (POST). In this case we don't have to identify the resource, because it doesn't exist yet.

POST   http://cardealer.com/api/cars/

But we have to send the data to create the resource.

Continuing with the example, let's create a new car. We include the necessary data within the POST HTTP message; it is similar to what we did in the Update resources section, but we are going to send all the required data, not only the engine power.

JSON example:

{
"brand": "Fiat",
"model": "Bravo",
"year": 2010,
"doors": 5,
"enginePower": 120,
"version": "Emotion",
"clima": true,
"ac": false,
"fuel": "Diesel"
}

We can let the system assign a new identifier, or simply send it within the message:

{
"identifier": "cce05bee-386b-11e5-a151-feff819cdc9f",
"brand": "Fiat",
"model": "Bravo",
"year": 2010,
"doors": 5,
"enginePower": 120,
"version": "Emotion",
"clima": true,
"ac": false,
"fuel": "Diesel"
}

Collections

All the actions we have explained so far were actually applied to a cars collection.

But what happens if a resource has a nested collection?

Continuing with the cars example, a car can use a set of engine oils, so the API must allow updating, deleting and creating elements in that set.

Note

For this example we will assume that the oil identifier is the type attribute.

Add an element to collection

When we add a car to the cars collection, what we do is create a new car, so it is the Create Resources case.

To add a new engine oil to the car cce05bee-386b-11e5-a151-feff819cdc9f, which already exists:

POST   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/

{
"type": "5W30",
"otherInfo": "This is the best oil for this car"
}

If we want to add another one:

POST   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/

{
"type": "10W30",
"otherInfo": "This is very good for cold weather"
}

Update a collection item

If we want to update the info of oil 5W30 of car cce05bee-386b-11e5-a151-feff819cdc9f:

PUT   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/5W30/

{
"type": "5W30",
"otherInfo": "This is no longer the best oil for this car"
}

Delete a collection item

To delete an oil 10W30 from car cce05bee-386b-11e5-a151-feff819cdc9f:

DELETE   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/10W30

Read a collection item

To get the info of oil 10W30 of the car cce05bee-386b-11e5-a151-feff819cdc9f:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/10W30

List collection items

As we have seen in Read a collection item, we can get the info of any collection element, but we can also get multiple collection elements and apply typical collection actions such as sorting and paging.

We can get all supported oils for car cce05bee-386b-11e5-a151-feff819cdc9f; it is as simple as:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/

We can also get sorted items:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?sort_by=type&order=asc

We can ask the API to return the first 10 oils for car cce05bee-386b-11e5-a151-feff819cdc9f:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?number_of_elements=10

The API can also support pagination:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?page=3&number_of_elements=2

The above request tells the API to return page 3 of all the oils of car cce05bee-386b-11e5-a151-feff819cdc9f, showing 2 oils per page. If we want to go to the next page:

GET   http://cardealer.com/api/cars/cce05bee-386b-11e5-a151-feff819cdc9f/oils/?page=4&number_of_elements=2

All those features are supported by query parameters.

Common mistake

The first time I tried to design a REST API, I designed an API, but not a RESTful one.

My main mistake was in the URL design: I added my own verbs, skipping the HTTP verbs.

For example:

POST    http://example.com/api/cars/ford-focus/delete-oil/5W30

The right way:

DELETE  http://example.com/api/cars/ford-focus/oils/5W30

Video Tutorials

These 2 videos helped me to understand REST URLs; I encourage you to watch them in full:

[1] Create, Read, Update, Delete

C++ Dependency Management: Biicode

I'm interested in building, dependency management, packaging and deployment of software projects. For Java, Scala, Python and so on it is quite easy, since there are tools like Maven, Gradle, pip, Sbt, etc. But regarding C++, the best options I've found are Maven with the NAR plugin or Gradle with the cpp plugin (incubating).

I found out about Biicode almost 2 years ago, but I never found time to test it, until today.

How does Biicode work?

First we have to install Biicode.

I've made a tiny example project using the logging system from the Poco library.

I've executed this command to create the project, called bii_log:

bii new carlosvin/bii_log --hello=cpp

I've created the project under my Biicode username, just in case I'd like to publish it later.

The previous command generates the file and directory structure, although we are going to focus only on:

blocks/carlosvin/bii_log/main.cpp
blocks/carlosvin/bii_log/biicode.conf

In biicode.conf we are going to configure our dependencies, in this example the Poco library.

# Biicode configuration file

[requirements]
    fenix/poco(develop): 0

[parent]
    carlosvin/bii_log: 0

[includes]
    Poco/*.h: fenix/poco/Foundation/include

In the [includes] section, we are overriding the path to the header files. If we didn't override it, we'd have to do something like this:

#include "fenix/poco/Foundation/include/Logger.h"

Thanks to this line, the include declarations are clearer, as follows:

#include "Poco/Logger.h"

Easy. Now we can start using Poco in our project, e.g.:

#include "Poco/FileChannel.h"
#include "Poco/FormattingChannel.h"
#include "Poco/PatternFormatter.h"
#include "Poco/Logger.h"
#include "Poco/AutoPtr.h"

using Poco::FileChannel;
using Poco::FormattingChannel;
using Poco::PatternFormatter;
using Poco::Logger;
using Poco::AutoPtr;

int main(int argc, char** argv) {
        AutoPtr<FileChannel> pChannel(new FileChannel);
        pChannel->setProperty("path", "log/sample.log");
        pChannel->setProperty("rotation", "100 K");
        pChannel->setProperty("archive", "timestamp");

        //AutoPtr<ConsoleChannel> pCons(new ConsoleChannel);
        AutoPtr<PatternFormatter> pPF(new PatternFormatter);
        pPF->setProperty("pattern", "%Y-%m-%d %H:%M:%S %s: %t");
        AutoPtr<FormattingChannel> pFC(new FormattingChannel(pPF, pChannel));
        Logger::root().setChannel(pFC);

        Logger & logger = Logger::get("TestChannel");
        for(int i=0; i<10000; i++){
                poco_information(logger, "This is a info");
                poco_warning(logger, "This is a warning");
        }
        return 0;
}

To compile the project we only have to execute the following command:

bii cpp:build

To publish the project and allow everyone to use it as we have used Poco:

bii publish

Besides the ease of use, I really like the integration with Eclipse CDT. After running "bii cpp:build", all files were properly indexed.

I've also read an article about the good integration with CLion: When CLion met biicode.

Software Maintenance

A few days ago at work, I had to fill in a document where I had to select the type of software maintenance I was going to apply.

The fact was I had only two choices, which seemed very weird to me because during my degree I studied 3 or 4 kinds of software maintenance.

Today I found my Software Engineering class notes; these are the types of software maintenance, sorted in descending order by percentage of time spent:

Perfective: Activities to improve or add new functionalities required by the user.
Adaptive: Activities to adapt the system to technological environment changes (hardware or software).
Corrective: Activities to fix defects in hardware or software, detected by users running the production system.
Preventive: Activities to ease future system maintenance.

Build C++ project with Gradle

Introduction

I am more and more concerned about building, dependency management and distribution of my projects. I'd like to find a tool that unifies those processes independently of the language. I know several tools that almost fit what I'm looking for, like SCons, Autotools, Ant, Maven and lately Gradle.

I've made several projects with Gradle, but I was always focused on Java and Android projects. For Java projects I've found it to be a Maven replacement, because it is faster, easier and less verbose. Regarding Android projects, I suffered the early adoption of Android Studio + Gradle, although currently I think they are more mature and work fine.

First of all, I have to say that building C/C++/Objective-C projects with Gradle is in the incubation phase, although we can already perform advanced tasks like:

  • Generating several artifacts within the same project (libraries and executables).
  • Dependency management between artifacts (no versions).
  • Different "flavors" of the same software, e.g: we can generate a “Community” release and other one with more enabled features called “Enterprise”.
  • It allows multi-platform binary generation.

As I said, this plugin still has limitations, although they are working on them: Gradle C++ roadmap. If they achieve it, I'll leave Autotools (I'm going to regret saying that).

Read more…