Connectivity options for mobile M2M/IoT/Connected devices

Many of us deal, or will deal, with (connected) M2M/IoT devices. This might mean writing firmware for microcontrollers, using an RTOS like NuttX or a full-blown Unix(-like) operating system such as FreeBSD or Yocto/Poky Linux, creating and building code to run on the device, processing data in the backend, or working somewhere in between. Many of these devices will have sensors to collect data such as GNSS position/time, temperature, light levels, acceleration, passing airplanes, lightning strikes, etc. The backend is work, but a mostly “solved” problem: one can rely on something like Amazon IoT or build a powerful infrastructure using many of the FOSS options for message routing, data storage, indexing and retrieval in C++. In this post I want to focus on the little detail of how data gets from the device to the backend.

To make this thought experiment a bit more real, let’s imagine we want to build a bicycle lock/tracker. Many of my colleagues ride their bicycle to work, and stolen bikes remain a big problem. So the primary focus of such an IoT device would be to prevent theft (make other bikes an easier target), to make selling a stolen bicycle more difficult (e.g. by making it easy to check whether a bike has been reported stolen) and, in case it has been stolen, to make it easier to find its current location.

Architecture

Let’s assume two different architectures. One possibility is to have the bicycle actively acquire its position and then try to push this information to a server (“active push”). Another approach is to have fixed scanning stations, or users who scan/report bicycles (“passive pull”). Both lead to very different designs.

Active Push

The system would need some sort of GNSS module, a microcontroller or a full-blown SoC to run Linux, an accelerometer and maybe more sensors. It should somehow fit into an average bicycle frame, have antennas good enough to work from inside the frame, last/work for the lifetime of a bicycle and, most importantly, provide a way to bridge the air gap from the bicycle to the server.
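To make the data flow concrete, here is a minimal sketch (C++) of the kind of report such a device might push periodically; the struct and every field name are made up for illustration and not taken from any existing product.

// Illustrative only: one possible "active push" report. All names and
// field choices are assumptions, not an existing protocol.
#include <cstdint>

struct PositionReport {
    uint64_t device_id;        // provisioned identity of the lock
    uint32_t unix_time;        // GNSS time of the fix
    int32_t  latitude_e7;      // WGS 84 latitude  * 1e7
    int32_t  longitude_e7;     // WGS 84 longitude * 1e7
    uint16_t battery_mv;       // remaining battery voltage in millivolt
    uint8_t  motion_detected;  // accelerometer flag since the last report
};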

Push architecture


Passive Pull

The device would not know its position or whether it is being moved. It might be a simple barcode/QR code/NFC tag/iBeacon/etc. In the case of a barcode it could encode the serial number of the frame and some owner/registration information. In the case of NFC it should be a randomized serial number (if possible, to increase privacy). Users would need to scan the barcode/QR code, and an application would annotate the found bicycle with the current location (cell towers, wifi networks, WGS 84 coordinates) and upload it to the server. For NFC a smartphone might be able to scan the tag, and one can try to put readers at busy locations.
The incentive for the app user is to feel good about collecting points for scanning bicycles, and maybe some reward if a stolen bicycle is found. Buyers could easily check whether a bicycle has been reported as stolen (leaving aside the difficulty of establishing ownership).
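For comparison with the active-push report above, a hypothetical upload from the scanning app could carry the tag plus whatever location hints the phone has; again, all names below are illustrative only.

// Illustrative only: what a scanning app might upload after reading a
// barcode/QR code or NFC tag.
#include <cstdint>
#include <string>
#include <vector>

struct ScanReport {
    std::string tag_id;                   // frame serial or randomized NFC id
    uint32_t    unix_time;                // time of the scan
    double      latitude;                 // WGS 84 position of the scanner
    double      longitude;
    std::vector<std::string> cell_ids;    // visible cell towers (optional)
    std::vector<std::string> wifi_bssids; // visible wifi networks (optional)
};
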
Pull architecture


Technology requirements

The technologies that come to my mind are barcodes, QR codes, playing some sound inaudible to humans and decoding it in an app, NFC, ZigBee, 6LoWPAN, Bluetooth, Bluetooth Smart, GSM, UMTS, LTE and NB-IoT. Next I will look at the main differentiators/constraints of these technologies, provide a small explanation and finish with how these constraints interact with each other.

World wide usable

Radio technology operates on a specific set of radio frequencies (bands). Each country may manage these frequencies separately, which can mean having to use the same technology on different bands depending on the current country. This increases the complexity of the antenna design (or requires multiple antennas), makes the mechanical design more complex and makes software and production testing more difficult. There might also be multiple users/technologies on the same band (e.g. wifi + Bluetooth, or just too many wifis).

Power consumption

Each radio technology requires power to broadcast and might require the device to listen or permanently monitor the air for incoming messages (“paging”). With NFC the scanner might be able to power the device, but for other technologies this is unlikely to be true. One will need to define the lifetime of the device and the size of the battery, or look into ways of replacing/recycling batteries or charging them.
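As a rough illustration, here is a back-of-envelope sketch (C++, with invented numbers) of how sleep current and daily transmit time translate into battery lifetime; the figures are assumptions, not measurements of any real module.

#include <cstdio>

int main() {
    const double battery_mah = 2600.0;  // assumed: one 18650-sized cell
    const double sleep_ma    = 0.05;    // assumed deep-sleep current
    const double tx_ma       = 250.0;   // assumed current while transmitting
    const double tx_s        = 10.0;    // assumed transmit time per day
    const double day_s       = 24.0 * 3600.0;

    const double avg_ma = (tx_ma * tx_s + sleep_ma * (day_s - tx_s)) / day_s;
    printf("average current: %.3f mA, lifetime: %.0f days\n",
           avg_ma, battery_mah / avg_ma / 24.0);
    return 0;
}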

Range

Different technologies were designed for sender and receiver being different minimum/maximum distances apart (and moving at different speeds, but that is not relevant for the lock, nor is the bandwidth for our application). E.g. with Near Field Communication (NFC) the workable range is centimeters, while with GSM it will be many kilometers, and with UMTS the cell size depends on how many phones are currently using it (the cell is “breathing”).

Pick two of three

Ideally we want something that works over long distances, requires no battery to send/receive and still pushes its position/acceleration/event reports out to the servers. Sadly this is not how reality works and we will have to set priorities.
The more bands to support, the more complicated the antenna design, production, calibration and testing become. It might be that one technology does not work in all countries, that it is not equally popular everywhere or that the market situation differs, e.g. some cities have city-wide public hotspots and some don’t.
Higher transmission power increases the range but increases the power consumption even more. More current is drawn during transmission, which requires a better hardware design to buffer the spikes, a bigger battery and ultimately a way to charge or efficiently replace batteries. Given these constraints it is time to explore some technologies. I will use the ones already mentioned at the beginning of this section.

Technologies

Technology | Bands | Global coverage | Range | Battery needed | Scan device needed | Cost of device | Arch. | Comment
-----------|-------|-----------------|-------|----------------|--------------------|----------------|-------|--------
Barcode/QR code | Optical | Yes | Centimeters | No | App scanning barcode required | extremely low | Pull | Sticker needs to be hard to remove and visible, maybe embedded into the frame
Play audio | Non-human-hearable audio | Yes | Centimeters | Yes | App recording audio | moderate | Pull | Button to play audio?
NFC | 13.56 MHz | Yes | Centimeters | No | Yes | extremely low | Pull | Privacy issues
RFID | Many | Yes, but not on a single band | Centimeters to meters | Yes | Receiver required | low | Pull | Many bands, specific readers needed
Bluetooth LE | 2.4 GHz | Yes | Meters | Yes | Yes, but common | low | Pull/Push | Competes with wifi for spectrum
ZigBee | Multiple | Yes, but not on a single band | Meters | Yes | Yes | mid | Push | Not commonly deployed, software more involved
6LoWPAN | Like ZigBee | Like ZigBee | Meters | Yes | Yes | low | Push | Uses the ZigBee physical layer and then IPv6; requires a 6LoWPAN-to-Internet translation
GSM | 800/900, 1800/1900 | Almost, besides South Korea, Japan and some islands | Kilometers | Yes | No | moderate | Push | Almost global coverage, direct communication with the backend possible
UMTS | Many | Less than GSM, but covers South Korea and Japan | Meters to kilometers, depends on usage | Yes | No | high | Push | Higher power usage than GSM, higher device cost
LTE | Many | Less than GSM | Designed for kilometers | Yes | No | high | Push | Expensive, higher power consumption
NB-IoT (LTE) | Many | Not deployed yet | Kilometers | Yes | No | high | Push | Not deployed, coming in the future; can be deployed in a re-farmed GSM carrier or embedded into an LTE carrier

Conclusion

Both a push and a pull architecture seem feasible and create different challenges and possibilities. A pull architecture will require at least smartphone app support and maybe a custom receiver device. It will only work in regions with lots of users, and how to preserve privacy and prevent unwanted tracking is something that still needs to be solved.
For a push architecture, using GSM is a good approach. If coverage in South Korea or Japan is required, a mix of GSM/UMTS might be an option. NB-IoT seems nice, but right now it is not deployed and it is not clear whether a module will require less power than a GSM module. NB-IoT might only be in the interest of basestation vendors (the future will tell). Using GSM/UMTS brings its own set of problems on the device side, but that is for other posts.

Punched holes in the great firewall?

I am in Shanghai right now and I was surprised that I could access the forbidden fruits, e.g. m.facebook.com:

 1  172.31.24.1 (172.31.24.1)  1.359 ms  1.079 ms  1.049 ms
 2  180.168.46.17 (180.168.46.17)  1.262 ms  1.180 ms  1.073 ms
 3  10.10.96.145 (10.10.96.145)  29.356 ms  29.781 ms  29.607 ms
 4  10.10.199.179 (10.10.199.179)  40.466 ms  40.220 ms  40.559 ms
 5  203.131.250.221 (203.131.250.221)  40.208 ms  40.581 ms  40.139 ms
 6  ae-0.facebook.chwahk02.hk.bb.gin.ntt.net (203.131.241.62)  41.732 ms  46.681 ms  42.098 ms
 7  psw01a.hkg3.tfbnw.net (173.252.64.176)  42.154 ms
    psw01b.hkg3.tfbnw.net (173.252.64.175)  42.104 ms
    psw01c.hkg3.tfbnw.net (173.252.64.174)  40.703 ms
 8  msw1ad.01.hkg3.tfbnw.net (204.15.21.235)  41.912 ms
    msw1ab.01.hkg3.tfbnw.net (204.15.21.247)  41.140 ms
    msw1as.01.hkg3.tfbnw.net (173.252.65.92)  40.938 ms
 9  edge-star-mini-shv-01-hkg3.facebook.com (31.13.95.36)  42.862 ms  41.607 ms  41.950 ms
So the traffic first goes to an ISP in Shanghai, then back into a private network, re-surfaces in Hong Kong and from there reaches the forbidden fruit? Is that normal routing?

Leaving Berlin, saying hello to Amsterdam

Berlin continues to gain a lot of popularity; culturally and culinarily it is an awesome place and, despite increasing rents, it still remains more affordable than other cities. In terms of economy, Berlin attracts new companies and branch offices as well. At the same time I felt the itch and it was time to leave my home town once again. In the end I settled on the bicycle-friendly (and sometimes sunny) city of Amsterdam.

My main interest remains building reliable systems with Smalltalk, C/C++ and Qt, learning new technology (Tensorflow? Rust? ElasticSearch, Mongo, UUCP), talking about GSM (SCCP, SIGTRAN, TCAP, ROS, MAP, Diameter, GTP) or getting re-exposed to WebKit/Blink.
If you are in Amsterdam, or know people or companies there, I am happy to meet and make new contacts.

C++, Qt and Treefrog to build user facing web applications

In the past I have written about my usage of Tufao and Qt to build REST services. This time I am writing about my experience of using the TreeFrog framework to build a full web application.

You might wonder why one would want to build such a thing in a statically typed and compiled language instead of something more dynamic. There are a few reasons for it:

  • Performance: The application is intended to run on our sysmoBTS GSM basestation (TI Davinci DM644x). By modern standards it is a very low-end SoC (ARMv5te instruction set, single core, low amount of RAM, etc.) and at the same time still perfectly fine to run a GSM network.
  • Interface: For GSM we have various libraries with a C programming interface and they are easy to consume from C++.
  • Compilation/Distribution: By (cross-)building the application there is a “single” executable and we don’t have the dependency mess of Ruby.
The second decision was not to use Tufao and to search for a framework that has user management and a template/rendering/canvas engine built in. At the Chaos Computer Camp in 2007 I remember having heard a conversation about “Qt” for the Web (Wt, the C++ Web Toolkit), and this was the first framework I looked at. It seems like a fine project/product, but interfacing with Qt seemed like an afterthought. I continued to look and ended up finding and trying the TreeFrog framework.
I am really surprised how long this project has existed without me having heard about it. It is built on top of Qt, uses QtSQL for the ORM mapping and QMetaObject for dispatching to controllers and for the template engine, and it resembles Ruby on Rails a lot. It has two template engines, routing of URLs to controllers/slots, and one can embed any C++ in the templates. The documentation is complete, and by using the search on the website I found everything I was looking for regarding my “advanced” topics. Because of my own stupidity I ended up single-stepping through the code, and a Qt coder should feel right at home there.
My favorite features:
  • tspawn model TableName will autogenerate (and later update) a C++ model based on the table in the database; the updating works as well.
  • The application builds a libmodel.so, libhelper.so (I removed that) and libcontroller.so. When started with the -r option the application will respawn itself. At first I thought I would not like it, but it improves round-trip times.
  • C++ in the template. The ERB template is parsed, a C++ class is generated and its ::toString() method produces the HTML code, so in case something goes wrong it is very easy to inspect (see the small sketch below).
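To give an idea, here is a tiny, made-up ERB-style view with plain C++ between the tags; only the <% %> / <%= %> markers follow the usual ERB conventions, the file name and markup are invented and not taken from the TreeFrog documentation.

<!-- hypothetical views/entry/index.erb: plain C++ between the ERB tags -->
<ul>
<% for (int i = 0; i < 3; ++i) { %>
  <li>Entry number <%= i %></li>
<% } %>
</ul>

At build time a view like this becomes a generated C++ class whose ::toString() emits the final HTML, which is what makes inspecting a misbehaving template so easy.
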
If you are currently using Ruby on Rails or Django but would like to do the same with C++, have a look at TreeFrog. I really like it so far.

HNBAP and RANAP support in Osmocom.org

Sysmocom is in the process of adding 3G support to OpenBSC. This is done by using an existing hNodeB femtocell that exposes the Iuh interface. The binary protocol for Iuh (and related protocols) is defined using the Abstract Syntax Notation One (ASN.1), and the aligned packed encoding rules (APER) are used to encode/decode the data.

After exploring several options, LaForge picked the one that adds APER support to Lev Walkin’s ASN.1 compiler, simplifies the 3GPP ASN.1 input files and then uses a Python script to post-process the result.

The next issue is that we have two protocol suites, HNBAP and RANAP, and want to use them inside the same codebase. To avoid conflicting types, LaForge extended the asn1c compiler to add a prefix to the generated types, and we are using this patched compiler.
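As a rough illustration (not the real generated code, and the exact names in osmo-iuh may differ): both suites define types with identical names, e.g. a Cause, so unprefixed asn1c output could not live in one binary, while the prefixed output keeps them apart.

// Illustrative declarations only; real asn1c output is far larger.
struct HNBAP_Cause { int present; /* ... */ };
struct RANAP_Cause { int present; /* ... */ };

// Both protocol suites can now be handled side by side in one codebase.
void handle_hnbap_cause(const HNBAP_Cause &cause);
void handle_ranap_cause(const RANAP_Cause &cause);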

At this point we had an encoder/decoder for the HNBAP and RANAP protocols and could begin writing our own Free Software HNB-GW. After working with the HNB-GW code, Daniel noticed several crashes. The crashes were related to making deep copies of the decoded data, and it took several iterations to neither leak nor double-free.

We have not had crashes, leaks or other issues in this part of the code for quite a while, and it seems time for a formal release. To build the osmo-iuh module you will need:

  • master of libosmocore
  • master of libosmo-abis
  • master of libosmo-netif
  • sysmocom/iu of libosmo-sccp
  • aper-prefix of asn1c
  • master of libasn1c
  • master of osmo-iuh
All of these modules can be found on git.osmocom.org, and osmo-iuh/contrib/jenkins.sh is a simple script to build the code, regenerate the source files and run the tests.
The work has been made possible thanks to NLnet.

Adding menu items in redmine (e.g. a link to contact information)

The Osmocom project is switching from a list of separate trac projects to a single instance of redmine. One German legal requirement is to publish contact information (Impressum), and it needs to be easily discoverable. The easiest option seems to be to add it to the top-level menu of our redmine. This way it will be available on all pages and just one click (or two on mobile, to open the menu) away.

The snippet below can be placed in a file called plugins/impressum_plugin/init.rb and uses the Redmine::MenuManager to add an entry to the top-level menu. It is everything that is needed: we want the item to be last and to point to a wiki page.

# Add an Impressum link pointing to a wiki page and make it the last entry
Redmine::MenuManager.map :top_menu do |menu|
    menu.push(:impressum, '/projects/cellular-infrastructure/wiki/Contact', :last => true)
end

osmo-pcap capture on the edge and store in the center

Imagine you run a GSM network and you have multiple systems at the edge of your network that communicate with other systems. For debugging reasons you might want to collect traffic and then look at it to explore an issue or look at it systematically to improve your network, your roaming traffic, etc.

The first approach might be to run tcpdump on each of these systems, rotate the files in a round-robin manner, compress the old traffic and then have a script that downloads/uploads it once a day to a central place. The issue is that each node needs to have enough disk space, you might not feel comfortable keeping old files on the edge, or you just don’t know when a good time to copy them is.

Another approach is to create an aggregation framework. A client uses libpcap to capture the traffic and then forwards it to a central server. The central server stores the traffic and might rotate the files based on size or age. Old files can then be compressed and removed.
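A minimal sketch of the client side of such a scheme using plain libpcap (C++); the device name and the forwarding step are placeholders, and this is not the actual osmo-pcap code.

// Capture packets with libpcap and hand them to a (not shown) forwarder.
#include <pcap.h>
#include <cstdio>

static void handle_packet(u_char *user, const struct pcap_pkthdr *hdr,
                          const u_char *bytes)
{
    // Here the real client would frame hdr + bytes and send them to the
    // central server over a TCP (or TLS) connection.
    (void)user;
    (void)bytes;
    printf("captured %u of %u bytes\n", hdr->caplen, hdr->len);
}

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *pcap = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!pcap) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(pcap, -1, handle_packet, nullptr);  // capture until interrupted
    pcap_close(pcap);
    return 0;
}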

I created the osmo-pcap tool many years ago and have recently fixed a 64-bit PCAP header issue (the timeval in the on-disk header is 32 bit), added collection of jumbo frames, updated the README.md file of the project, created packages for Debian, Ubuntu, CentOS, OpenSUSE and SLES, and made sure that it can be compiled and used on FreeBSD 10 as well.
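For context, this is roughly the classic pcap per-packet record header as stored on disk; all four fields, including the timestamp, are fixed 32-bit values, so writing a native struct timeval (64-bit fields on modern systems) instead corrupts the capture file.

#include <cstdint>

// Classic libpcap on-disk record header (per captured packet).
struct pcap_record_header {
    uint32_t ts_sec;    // timestamp, seconds
    uint32_t ts_usec;   // timestamp, microseconds
    uint32_t incl_len;  // number of octets stored in the file
    uint32_t orig_len;  // original length of the packet on the wire
};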

If you are using or decided not to use this software I would be very happy to hear about it.

A C++ project without Qt

My primary language is Smalltalk (Pharo and GNU Smalltalk), I have not written much C++1x code yet, and for a new project I decided to use C++ and experiment. The task was rather simple: receive a message, check if this message type needs to be re-written, parse it, make a MongoDB look-up, patch it and send it on.

While looking for MongoDB drivers for Qt I only found the pure C++ driver, which works with std::string and exceptions and only offers a blocking interface. At this point I decided to see how painful using the STL is these days and opted for a pure STL project. Many years ago the only online documentation came from SGI, there were no examples, no autocompletion in editors, etc.

The application is relatively trivial. It uses lambdas to hook two components (the receiver and the patcher) together, std::thread with lambdas to spawn worker threads because of the blocking interface of the database driver, and std::chrono for some ad-hoc StatsD code to add event counters and performance monitoring.
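The following is a minimal sketch of that wiring (component and variable names are invented): a receiver hook implemented as a lambda, with the blocking work pushed onto std::thread workers.

#include <functional>
#include <string>
#include <thread>
#include <utility>
#include <vector>

struct Receiver {
    std::function<void(std::string)> on_message;  // hook set by the wiring code
};

int main() {
    Receiver receiver;
    std::vector<std::thread> workers;

    // Hook the receiver to the patcher with a lambda; each message gets its
    // own worker thread because the database driver blocks.
    receiver.on_message = [&workers](std::string msg) {
        workers.emplace_back([msg = std::move(msg)]() {
            // blocking MongoDB look-up, patching and forwarding of 'msg'
            // would go here
            (void)msg;
        });
    };

    receiver.on_message("example message");
    for (auto &worker : workers)
        worker.join();
    return 0;
}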

So how did this experiment go?

The C++11/C++14 documentation that is available has improved compared to the old days, but it is still below the quality of the Qt documentation.

Once deploying this on Debian I immediately had odd crashes, and I blame ABI issues, as my application used C++1x but the mongo-cxx-driver was apparently not built with it. I ended up packaging the latest stable (“legacy”) standalone driver for Debian and made sure it is compiled with C++1x.

The std::chrono API has potential (as it allows me to control the resolution), but the API, especially in C++11 without the new literals, makes easy things difficult. All I wanted was something like QElapsedTimer::elapsed(). So I think the Qt API is better designed for this use case (my measurement resolution is milliseconds and not picoseconds).
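For comparison, a small sketch of the elapsed-milliseconds helper one ends up writing with std::chrono (the class name is of course made up).

#include <chrono>

class ElapsedTimer {
public:
    ElapsedTimer() : m_start(std::chrono::steady_clock::now()) {}

    long long elapsed() const {  // milliseconds since construction
        using namespace std::chrono;
        return duration_cast<milliseconds>(steady_clock::now() - m_start).count();
    }

private:
    std::chrono::steady_clock::time_point m_start;
};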

For socket code I was using pure C/libc. As my database code was blocking I could make my socket code blocking as well.

I normally use vim, but when working with an unknown API I totally enjoy the comfort Qt Creator provides. I was able to jump to the header files and explore the interface. I think it has saved me quite some time while wrangling with the STL.

New languages come with their own build tools these days (Rust/Cargo, Ruby/Rake, …) and C++1x doesn’t have anything to offer here. The options I considered were autotools, CMake or qmake. I think autotools are overkill here, I dislike CMake for various reasons and I like the simplicity of qmake.

In the past I had played with the Google C++ unittest framework and actually liked it a lot. These days one is asked to copy and paste the framework into the code base. It is an ongoing debate whether one should embed third-party libraries or not, but as my application is rather small I decided it was a no-go. I have used and enjoyed the Qt testlib, so my STL-only application is tested using QObject and QCoreApplication.

Conclusion:

The STL (and its documentation) has improved. I still prefer the API design of Qt, and when not just interfacing with a library that uses std::string, Qt is still my first choice (as it provides event loops, signals/slots, networking, HTTP, etc.).

OpenCore and Python moving to Github

Some Free Software projects have already moved to GitHub, some probably plan it, and the Python project will move soon. I have not followed the reasons for why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime, offer checkouts through ssh, git and http (good for corporate firewalls) as well as a subversion interface, they have integrated wiki and ticket management, the fork feature allows an upstream to discover what is being done to the software, and the pull requests and the integration with third-party providers are great. The last item allows many nice things, especially integrating with a ton of continuous integration tools (Travis, Semaphore, Circle, who knows).

Not everything is great though. As a Free Software project one might decide that using proprietary JavaScript to develop and interact with a Free Software project is not acceptable, or one might want to control the repository oneself, and then people look for alternatives. At the Osmocom project we are using cgit, mailing lists, patchwork and trac, host our own jenkins, and then mirror some of our repositories to github.com for easy access. Another option is to find a platform that is like GitHub but Free, and a lot of people look or point to gitlab.com.

From a freedom point of view I think GitLab is a lot worse than GitHub. They try to create the illusion that it is a Free Software alternative to github.com and they offer to host your project, but if you want to have the same features for self-hosting you will notice that you fell for their marketing. Their website prominently states “Runs GitLab Enterprise Edition”. If you have a look at the feature comparison between the “Community Edition” (the Free Software project) and their open-core additions (the Enterprise Edition), you will notice that many of the extra features are essential.

So when deciding whether to put your project on github.com or gitlab.com, the question is not proprietary versus Free Software but essentially proprietary versus proprietary, and as such there is no difference.

Build or buy a GSM HLR? Is there an alternative?

The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (that actually works), in most cases the customer will end up in a vendor lock-in:

  • The vendor might force you to run on hardware sold by that vendor. This might just be a Dell box with a custom front, really custom hardware in a custom chassis, or it might even require you to install an entire rack. Either way you are tied to a single supplier.
  • It might come with a yearly license (or support fee) and on top of that might be dongled, so after a reboot the service might not start because the new license key has not been copied.
  • The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for roaming steering or multi-IMSI, and you can be sure to pay a premium for these features (even if they are off-the-shelf extensions).
  • There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge you a premium, simply because it can.
The alternative is to build a component from scratch, and the initial progress will be great as the technology is more manageable than it was many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work, but only then will the fun start. How big is your test suite? Do you have tests for ITU Q.787? How will you do load balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their head (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN.1, TCAP).
But there is a third way and it is available today. Look for a Free Software HLR and give it a try. Check which features are missing and which ones you want, and develop them yourself or ask a company like sysmocom to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:
  • You can run the component on any Linux/FreeBSD system: on physical hardware, on virtualized hardware, together with other services or on its own. You decide.
  • The software will always be yours. Once you have a running system, there will be nothing (besides time_t overflowing) that has been designed to fail (no license key expires).
  • Independence from a single supplier. You can build a local team to maintain the software, or you can find another supplier to maintain it.
  • Built for change. Having access to the source code enables you to modify it, and with a Free Software license you are allowed to run your modified versions as well.
The only danger is to make sure not to fall into the OpenCore trap that surrounds many OpenSource projects. Make sure that everything you need is available in source and allows you to run modified copies.