CAMEL and protocol design

Today I want to share the pain of running a production 3GPP TCAP/MAP/CAP system and talk about network protocol design in general. The excellent Free Software ASN1/TCAP/MAP/CAP stack (made possible by the Pharo live programming environment) I helped create is in heavy production use (powering standard off-the-shelf components like an SGSN or an AuC, as well as non-standard components that enable new business cases) and sees roaming traffic from a lot of networks. From time to time something odd comes up.

In TCAP/MAP/CAP the messages, the Requests/Responses and the possible Errors are defined using ASN1. Over the last decades ETSI and 3GPP have made various major versions and minor releases (e.g. adding new optional attributes to requests/responses/errors). The biggest new standard is CAMEL, and it is so big and complicated that it was specified in four phases (each phase with its own versions of the ApplicationContext; think of it as a versioned entry point into the definitions of all messages and RPC calls).

One issue in supporting a specific module version (application-context-name) is to find the right minor release of the 3GPP specification (either the newest or the oldest defining that ACN). Then it is a matter of copying and pasting the ASN1 definitions from either a PDF or a Word document into individual files, and after that is done one can fix the broken imports (or modify the ASN1 parser to do a global look-up) and the typos in element names.

This artificial barrier creates two issues for people implementing components that use MAP/CAP. Some use inferior ASN1 tools or can’t be bothered to create the input files and decide to hardcode the message content (after all, BER/DER is more or less just nested TLV entries). The second issue is related to time/effort as well. When creating the CAMEL ASN1 files I didn’t want to do the work four times (once for each phase) and searched for shortcuts too.

The first issue materializes as equipment sending completely broken messages or not sending mandatory(!) elements. So what happens if a big telco sends you a message the stack can’t decode, and you look up the oldest and youngest release defining this ACN only to see that the element that failed to parse has always been mandatory? Right, one adds an OPTIONAL modifier to be able to move forward…

The second issue is on me though. I started with a set of CAMEL phase 3 files and assumed that only the operations (and their arguments/results) would differ across CAMEL phases, while the supporting structs they use would stay the same. My assumption (and this brings us to protocol design) was that besides versioning the module they would be conservative and extend supporting types in a forward compatible way, so I integrated phase 2 and phase 1 into the same set of files.

And then reality set in: the logs of the system showed a message that caused an exception during parsing (normally this only happens for the first kind of issue). An extension to the Request structure was changed in a way that is not forward compatible. Let’s have a look:

InitialDPArgExtension ::= SEQUENCE {
-  naCarrierInformation             [0] NACarrierInformation OPTIONAL,
-  gmscAddress                      [1] ISDN-AddressString OPTIONAL,
-  ...
+  gmscAddress                      [0] ISDN-AddressString OPTIONAL,
   *more new optional elements*
+  ...,
+  enhancedDialledServicesAllowed   [11] NULL OPTIONAL,
   *more elements after the extension marker*
}

So one element (naCarrierInformation) got removed, every following element was renumbered and the extension marker was moved further down. In theory the InitialDPArgExtension name binding exists once in the phase 2 definition and once in phase 3, and 3GPP had every right to define a new binding with different tag numbers. The engineering question is whether this was a good decision.

A change in the application-context allows removing some old cruft and making room for new elements. The tag space might be considered a scarce resource, and making room conserves that resource. On the other hand, in the history of GSM no other struct has run out of tags and there are various other approaches to the problem. The above is already an extension to an extension, and the step to an extension of an extension of an extension doesn’t seem so absurd anymore.

So please think of forward compatibility when designing protocols, think of the implementor, make the definition machine readable and please get the imports right so one doesn’t need to resort to a global symbol search. If you are having interesting core network issues related to TCAP, MAP and CAP, consider contacting me.

MariaDB Galera and custom health probe for Azure LoadBalancer

My Galera set-up on Kubernetes and the Azure LoadBalancer in front of it seem to work nicely, but one big TODO is to implement proper health checks. If a node is down, in maintenance or split from the network, it should not be part of the LoadBalancer. The Azure LoadBalancer has support for custom HTTP probes and I wanted to write something very simple that handles the HTTP GET, opens a MySQL connection to the destination and checks if it is connected to a primary. As this is about health checks, the code should be small and reliable.

To improve my Go(-lang) skills I decided to write my healthcheck in Go. And it seemed like a good idea: Go has a powerful HTTP package, a SQL API package and two MySQL implementations. So the entire prototype is just about 72 lines (with comments and empty lines) and I think that qualifies as small. Prototyping the MySQL code took some iterations, but in general it went quite quickly. But how reliable is it? Go introduced the nice concept of a context.Context: any operation should be associated with a context, and that context should be passed as an argument from one method to another. One can create a child context, associate it with a deadline (absolute time) or timeout (relative) and has a way to cancel it.

I grabbed the Context from the HTTP Request, added a timeout and called a function to do the MySQL check. Wow, that was easy. Some polish to parse the parameters from the CLI and I am ready to deploy it! But let’s see how reliable it is.
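
The shape of the probe was roughly the following (a minimal sketch, not the actual 72-line prototype, assuming the github.com/go-sql-driver/mysql driver and a Galera node that reports wsrep_cluster_status; the DSN, address and the /health path are made-up placeholders):

package main

import (
    "context"
    "database/sql"
    "fmt"
    "log"
    "net/http"
    "time"

    _ "github.com/go-sql-driver/mysql"
)

// checkPrimary connects to the given node and verifies that it is part of
// the primary component of the Galera cluster.
func checkPrimary(ctx context.Context, dsn string) error {
    db, err := sql.Open("mysql", dsn) // Open does not dial yet
    if err != nil {
        return err
    }
    defer db.Close()

    // These calls honor the deadline once the driver supports context.Context.
    if err := db.PingContext(ctx); err != nil {
        return err
    }
    var name, status string
    row := db.QueryRowContext(ctx, "SHOW STATUS LIKE 'wsrep_cluster_status'")
    if err := row.Scan(&name, &status); err != nil {
        return err
    }
    if status != "Primary" {
        return fmt.Errorf("node is not in the primary component: %s", status)
    }
    return nil
}

func probeHandler(w http.ResponseWriter, r *http.Request) {
    // Child context with a timeout, derived from the request context.
    ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
    defer cancel()

    if err := checkPrimary(ctx, "probe:secret@tcp(10.23.42.2:3306)/"); err != nil {
        http.Error(w, err.Error(), http.StatusServiceUnavailable)
        return
    }
    w.WriteHeader(http.StatusOK)
}

func main() {
    http.HandleFunc("/health", probeHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}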

I imagined the following error conditions:

  1. The destination IP is reachable but no one is listening on the port. The TCP connection will fail quickly (SYN -> RST,ACK)
  2. The destination IP ends in a blackhole (no RST,ACK received). One would run into a large connect timeout
  3. The Galera node (or machine hosting it) is overloaded. While the connect succeeds the authentication or a query might stall
  4. The Galera node is split and not a master

The first and fourth error conditions are easy to test/simulate and trivial to implement properly. I then moved to the third one. My first choice was to simulate an infinitely slow Galera node by using nc -l 3006 to accept a TCP connection and then send nothing. I ran a health probe and waited… and waited… no timeout. Not after 2s as programmed in the context, not after 2 min and not after… (okay, I gave up after 30 min). Pretty discouraging!

After some reading and browsing I saw an open PR to add context.Context support to the MySQL backend. I modified my import, ran go get to fetch it, go build and retested. Okay, that didn’t work either. So let’s try the other MySQL implementation: again change the package imports, go get, go build and retest. I first picked the wrong package name, but even after picking the right one this driver failed to parse the database URL. At that point I decided to go back to the first implementation and have a deeper look.

So while many of the SQL API methods take a Context as an argument, Open does not. Open says it might or might not connect to the database, and in the case of MySQL it does connect to it. Let’s see if there is a workaround. I could spawn a Go routine and have a selective receive on the result or a timeout. While this would make it possible to respond to the HTTP request, it creates two issues: one can’t cancel Go routines, so I would leak memory, and worse, I might run into a connection limit of the Galera node. What about other workarounds? It seems I can play with custom parameters for readTimeout and writeTimeout and at least limit the timeout per I/O operation. I guess it takes a bit of tuning to find good values for a busy system, and let’s hope that context.Context will be used in more places in the future.
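
For reference, the go-sql-driver/mysql DSN accepts timeout, readTimeout and writeTimeout parameters. A small sketch of that workaround (host, credentials and the concrete values are placeholders):

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // Hypothetical DSN: timeout caps the TCP connect, readTimeout and
    // writeTimeout cap each individual I/O operation.
    dsn := "probe:secret@tcp(10.23.42.2:3306)/?timeout=2s&readTimeout=1s&writeTimeout=1s"
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Ping forces an actual connection, so the dial timeout applies here.
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
    log.Println("connected within the configured limits")
}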

Troubleshooting Kubernetes/Azure Storage

In my previous posts I wrote about my set-up of MariaDB Galera on Kubernetes. Now I have some first experience with this set-up and can provide some guidance. I used an ill-fated TCP health-check that led to MariaDB Galera blocking the originating IPv4 address from accessing the cluster (due to never completing a MySQL handshake), and it seems (the logs are gone) that this led to the sync between the different systems breaking too.

When I woke up, my entire cluster was down and didn’t recover. Some pods restarted and I ran into an Azure Kubernetes bug where a persistent volume would be unmounted but not detached. This means the storage cannot be attached to the new pod. The Microsoft upstream project is a bit hostile but the issue is known. If you are seeing an error about the storage still being attached/detached, you can go to the portal, find the agent that has it attached and detach it by hand.

To bring the cluster back online there is a chicken/egg problem. The entrypoint.sh discovers the members of the cluster by using environment variables. If the cluster is entirely down and the first pod is starting, it will just exit as it can’t connect to the others. My first approach was to keep the other nodes down, use kubectl edit rc/galera-node-X and set replicas to 0. But then the service still exports the information. In the end I deleted the svc/galera-node-X services and waited for the first pod to start. Once it was up I could re-create the services again.

My next steps are to add proper health checks, add some monitoring and see if there is a more long-term archive for the log data of a (deleted) pod.

Starting to use the Galera cluster

In my previous post I wrote about getting a MariaDB Galera cluster started on Kubernetes. One of my open issues was how to get my existing VM to connect to it. With Microsoft Azure the first thing is to add network peering between the Kubernetes cluster and the normal VM network. As previously mentioned, the internal IPv4 address of the Galera service is not reachable from the outside, and the three service types for exposing it are:

  • LoadBalancer
  • ClusterIP
  • NodePort

While the default Microsoft Azure setup already has two LoadBalancers, the kubectl expose --type=LoadBalancer command does not seem to allow me to choose which load balancer to use. So after trying this command my Galera cluster was reachable through a public IPv4 address on the standard MySQL port. While it is password protected, it didn’t seem like a good idea. To change the config you can use something like kubectl edit svc/galera-cluster and change the type to another one. Then I tried the NodePort type, got the MySQL port exposed on all masters and, thanks to the network peering, was able to connect to them directly. Then I manually modified the already configured/created Microsoft Azure LoadBalancer for the three masters to export port 3306 and map it to the internal port. I am also doing a basic health check which checks if port 3306 can be connected to.

Now I can start using the Galera cluster from my container based deployment before migrating it fully to Kubernetes. My next step is probably to improve the health checks to only get primaries listed in the LoadBalancer and then add monitoring to it as well.

Galera on Kubernetes

As part of my journey to “cloud” computing I built a service that is using MySQL and as preparation for the initial deployment I set myself the following constraints:

  • Deploy in containers
  • Be able to tolerate some failure of VMs
  • Be able to grow/replace storage without downtime

Containers

There are pre-made mariadb:10.1 containers, but to not rely on a public registry I have used the Microsoft Azure Container Service to upload my container. The integration into the standard docker tools to create and upload containers just worked. It also gives me a place for modified containers.

Cluster

With Azure it doesn’t seem possible to resize (grow) a volume online, and if I ever want to switch from ext4 to xfs (or zfs?) I should run some form of fault-tolerant MySQL so I can take a node down and upgrade it. These days MariaDB 10.1 includes Galera support, and besides some systematic issues (which I don’t seem to run into as I have little to no transactions) it seems quite easy to set up.

Fault tolerance

Fault tolerance comes in a couple of flavors. Galera is a multi-master database where the cluster will continue to allow writes as long as there is a majority of active nodes. If I start with three nodes, I can take one off the cluster for maintenance.

Kubernetes will reschedule a pod/container to a different machine (“agent”) in case one becomes unhealthy, and it will expose the Galera cluster through a LoadBalancer with a single IPv4 address. This means only active members of the cluster will be contacted.

The last part is provided by Microsoft Azure’s availability sets. Distributing the agents into different zones should prevent all of them from going down at the same time during maintenance.

So in theory this looks quite nice, only practice will tell how this will play out.

Set-up

After having picked Microsoft Azure, Kubernetes and Galera, it is time to set it up. I have started with an example found here. I had to remove some labels to make it work with the current format, moved the container to mariadb:10.1 and modified the default config.

I had to look a bit at how to get persistent storage. I am directly mounting the disk for the pod; an alternative is a persistent volume claim, which might be a better approach.

The biggest issue is starting the first service. It requires passing special parameters to initialize the cluster and involved a round of kubectl edit/kubectl delete to get it up. Having the second and third members join was easier.

Challenges/TODOs

Besides having to gain more experience with it, I do face a couple of problems with this setup and need to explore solutions (or wait for comments?).

I deployed my application before having a Kubernetes cluster and now need to migrate. The default networking of Kubernetes works by adding a lot of masquerading entries on agents and masters. Inside the cluster these addresses are routable through masquerading, but from the outside they are not reachable. I need to find a way to access them, probably by sacrificing some redundancy first. The other option is to use kubectl expose, but I don’t want my cluster to have a public IPv4 address. I need to see how to have an internal load balancer with a private/internal IPv4 address.

Galera cluster management is a bit troubling. The first time I start with a new disk, the node will not properly connect to the master but will still register itself with the LoadBalancer/Service. I need to manually kubectl delete the pod and wait for it to reschedule. That is probably easy to fix. The second part of the problem is that I should use health checks and only register the pod once it has connected and synced to the primaries.

Rolling upgrades seem to have a systematic issue too. With the default behavior of the built-in replication controller, a new pod (N+1) will be launched and brought up and only then will the current Galera node be stopped (back to N). This falls apart with the way I mount the storage/disk: the new pod cannot mount the disk as it is already mounted, and the old pod will never be deleted.

Least problematic is auto-scaling. In the example set-up each node is a service by itself, using one persistent disk, which makes scaling the cluster a bit difficult. I can add new nodes and they will discover the master(s), but to have the masters remember the new nodes I would need to recycle the pods.

Kubernetes on Microsoft Azure

The recent Amazon S3 outage should make a strong argument that centralized services have severe issues, technically but from a business point of view as well (you don’t own the destiny of your own product!), and I wholeheartedly agree with “There is no cloud, it’s only someone else’s computer”.

Still, from time to time I like to see beyond my own nose (and I prefer the German version of that proverb!) and the current exploration involves ReactJS (which I like), Tensorflow (which I don’t have enough time for) and generally looking at Docker/Mesos/Kubernetes to manage services and do zero-downtime rolling updates. I have browsed and read the documentation over the last year, I like the concepts (services, replication controllers, pods, agents, masters) and I planned how to use it, but because it doesn’t support SCTP I never looked into actually using it.

Microsoft Azure has the Azure Container Service and since the end of February it has been possible to create Kubernetes clusters. This can be done using v2 of the Azure CLI or through the portal. I finally decided to learn some new tricks.

Azure asks for a clientId and password, and I entered garbage and hoped the necessary accounts would be created. It turns out that, first, the portal does not create the account and does not sanity-check these credentials and, second, the master will not start properly when it boots. The Microsoft support was very efficient and quick to point that out; I wish the portal would make a sanity check though. So make sure to create a service principal first and use it correctly. I ended up creating it on the CLI.

I re-created the cluster and executed kubectl get nodes. It started to look better, but one agent was missing from the list of nodes. After logging in I noticed that kubelet was not running. Trying to start it by hand showed that docker.service was missing. Now why it is missing is probably for Microsoft engineering to figure out, but the Microsoft support gave me the following:

sudo rm -rf /var/lib/cloud/instances
sudo cloud-init -d init
sudo cloud-init -d modules -m config
sudo cloud-init -d modules -m final
sudo systemctl restart kubelet

After these commands my system had a docker.service, kubelet would start and the machine was listed as a node. Commands like kubectl expose are well integrated and use a public IPv4 address that is different from the one used for ssh/management. So all in all it was quite easy to get a cluster up, and I am sure that some of the hiccups will be fixed…

State of structured text for technical documentation

A long time ago I wrote the OpenEmbedded User Manual and back then the obvious choice was to make it a docbook. In my community there were plenty of other examples that used docbook and it helped to get started. The great thing about docbook was that with one XML input one could generate output in many different formats like HTML, XHTML, ePub or PDF. It separated the content from the presentation and was tailored for technical documents and articles, with advanced features like generating a change history, an appendix and many more. With XML entities it was possible to share chapters and parts between different manuals.

When creating Sysmocom and starting to write our user manuals, we continued to use docbook. After all, besides the many XML tags, it is a format that can be committed to git, allowing review, and publishing is just like a software build that can be triggered through git.

On the other hand, writing XML by hand and indenting paragraphs to match the tree structure of the document is painful. In hindsight writing a docbook feels more like writing XML tags than writing content. I started to look for alternatives, heard about asciidoc, discarded it, then re-evaluated it and started to use it as the default. The ratio of content to formatting is really great. With a2x we continued to use docbook/dblatex to render the document. With some trial and error we could even continue to use the docbook docinfo (-a docinfo and a file manual-docinfo.xml). And finally asciidoc can be used on github as well: adding .adoc to the filename is enough to have it rendered nicely.

So with asciidoc, restructured text (rst), markdown (md) and many more (textile, pillar, …) we have great tools that make it easier to focus on the content and still get an okay look. The downside is that there are now so many of them (and incompatible dialects). This leads to big differences between rendering tools, e.g. not being able to use a docinfo for PDF generation, being able to add raw PDF commands, etc.

In an attempt to pick up users where they are, I am exploring using readthedocs.org as an additional channel for documents. The website can integrate with github to automatically rebuild the documentation. One issue is that they exclusively use Python sphinx to render the documentation, which means the input needs to be rst or markdown (or both).

I can go down the xkcd way and create a meta-format to rule them all, try to use pandoc to convert these documents on the fly (but pandoc already had some issues with basic rst tables) or switch the format. I looked at rst2pdf, but while powerful it seems to lack docinfo support and markdown input. I am currently exploring staying with asciidoc and then using asciidoc -> docbook -> markdown_github for readthedocs. Let’s see how far this gets.

Starting with a Diameter stack

Moving on from 2G/3G requires learning a new set of abbreviations. The network is referred to as the IP Multimedia Subsystem (IMS) and the HLR becomes the Home Subscriber Server (HSS). The ITU ASN1 used in 2G/3G to define the RPCs (request, response, potential errors), message structure and encoding is replaced with a set of IETF RFCs. From my point of view the names of messages and attributes change, but the basic broken trust model remains.

Having worked on probably the best ASN1/TCAP/MAP stack in Free Software, it is time to move to the future and apply the good parts and lessons learned to Diameter. The first RFC to look at is RFC 6733 – Diameter Base Protocol. It defines the basic encoding of messages, the requests, responses and errors, a BNF grammar to define these messages, when and how to connect to remote systems, etc.
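
To give a feeling for the fixed part of that encoding, here is a minimal Go sketch of the 20-byte Diameter header from RFC 6733. It is for illustration only (the actual stack work happens in Pharo/Smalltalk) and the sample bytes are made up.

package main

import (
    "encoding/binary"
    "errors"
    "fmt"
)

// DiameterHeader mirrors the fixed 20-byte header from RFC 6733 section 3.
type DiameterHeader struct {
    Version       uint8
    MessageLength uint32 // 24 bits on the wire
    CommandFlags  uint8
    CommandCode   uint32 // 24 bits on the wire
    ApplicationID uint32
    HopByHopID    uint32
    EndToEndID    uint32
}

func parseHeader(b []byte) (DiameterHeader, error) {
    if len(b) < 20 {
        return DiameterHeader{}, errors.New("short Diameter header")
    }
    return DiameterHeader{
        Version:       b[0],
        MessageLength: uint32(b[1])<<16 | uint32(b[2])<<8 | uint32(b[3]),
        CommandFlags:  b[4],
        CommandCode:   uint32(b[5])<<16 | uint32(b[6])<<8 | uint32(b[7]),
        ApplicationID: binary.BigEndian.Uint32(b[8:12]),
        HopByHopID:    binary.BigEndian.Uint32(b[12:16]),
        EndToEndID:    binary.BigEndian.Uint32(b[16:20]),
    }, nil
}

func main() {
    // Made-up header of a request with command code 257 (CER) and the
    // R flag set, without any AVPs following it.
    raw := []byte{
        0x01, 0x00, 0x00, 0x14, // version 1, message length 20
        0x80, 0x00, 0x01, 0x01, // flags R, command code 257
        0x00, 0x00, 0x00, 0x00, // application-id 0
        0x00, 0x00, 0x00, 0x01, // hop-by-hop identifier
        0x00, 0x00, 0x00, 0x01, // end-to-end identifier
    }
    hdr, err := parseHeader(raw)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", hdr)
}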

The core part of our ASN1/TCAP/MAP stack is that the 3GPP ASN1 files are parsed and, instead of just generating structs for the types (as done by asn1c and many other compilers), we have a model that contains the complete relationship between application-context, contract, package, argument, result and errors. From what I know this is quite unique (at least in the FOSS world) and it has allowed rapid development of an HLR, an SMSC, an SCF, security research and more.

So getting a complete model is the first step. This will allow us to generate encoders/decoders for languages like C/C++, be the base of a stack in Smalltalk, allow browsing the model graphically, generate fancy pictures, … The RFC defines a grammar for how messages and grouped Attribute-Value-Pairs (AVPs) are specified, and then a list of base messages. The Erlang/OTP framework has extended this grammar to define a module and the relationships between modules.

petitparser_diameter (screenshot of the PetitParser Browser)

I started by converting the BNF into a PetitParser grammar, which means each rule of the grammar becomes a method in the parser class; one can then create a unit test for this method and test the rule. To build a complete parser, the rules are combined with each other (and, or, min, max, star, plus, etc.). One nice tool to help with debugging and testing the parser is the PetitParser Browser. It is pictured above and it can visualize a rule, show how rules are combined with each other, generate an example based on the grammar, and partially parse a message and provide debug hints (e.g. ‘::=’ was expected as the next token).

After having written the grammar I tried to parse the RFC example and it didn’t work. The sad truth is that while the issue was already known for RFC 3588, it has not been fixed. I created another errata item; let’s see if and when it gets picked up in future revisions of the base protocol.

The next step is to convert the grammar into a module. I will progress as time permits and contributions are more than welcome.

Analyze cellular problems using Quectel modules

Introduction

Previously I have written about connectivity options for IoT devices and today I assume that a cellular technology (e.g. names like GSM, 3G, UMTS, LTE, 4G) has been chosen. Unless you are a big vendor you will end up using a module (instead of a chipset) and either you are curious what the module is doing behind its AT command interface or you are trying to understand a real problem. The following is going to help you or at least be entertaining.

The xgoldmon project was one of the first to provide air interface traces and logging to the general public, but it was limited to the Infineon baseband (and some Gemalto devices), needed special commands to enable it and didn’t include all messages all the time.

In the last months I have worked intensively with modules of a vendor called Quectel. They are using Qualcomm chipsets and have built the GSM/UMTS Quectel UC20 and the GSM/UMTS/LTE Quectel EC20 modules. They are available as a variant to solder, but to speed up development they are offered as miniPCI express cards as well. I ended up putting them into a PCengines APU2, soldered an additional SIM card holder for the second SIM card, placed U.FL to SMA connectors and put it into one of their standard cases. While the UC20 and EC20 are pretty similar, the software is not the same and some basic features are missing from the EC20, e.g. the SIM ToolKit support. The easiest way to acquire these modules in Europe seems to be through the above links.

The extremely nice feature is that both modules export Qualcomm’s bi-directional DIAG debug interface over USB (without having to activate it through an undocumented AT command). It is a framed protocol with a simple checksum at the end of each frame, and many general types of frames (e.g. logging and how regions are described) are known and used in projects like ModemManager to extract additional information. Some parts, including things like Tx-power, are not well understood yet.
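
As a rough illustration of the framing, the sketch below splits a raw byte stream into frames, assuming the commonly documented HDLC-style conventions for DIAG (0x7E as frame terminator, 0x7D as escape with the following byte XORed with 0x20). It leaves the trailing checksum bytes in place and does not verify them, and the sample bytes are made up.

package main

import "fmt"

// splitFrames takes a raw DIAG byte stream and returns the unescaped frames.
// Frames are assumed to end with 0x7E and to use 0x7D as the escape byte; the
// last two bytes of each frame would carry the checksum, which is not checked.
func splitFrames(raw []byte) [][]byte {
    var frames [][]byte
    var cur []byte
    escaped := false
    for _, b := range raw {
        switch {
        case escaped:
            cur = append(cur, b^0x20)
            escaped = false
        case b == 0x7D:
            escaped = true
        case b == 0x7E:
            if len(cur) > 0 {
                frames = append(frames, cur)
                cur = nil
            }
        default:
            cur = append(cur, b)
        }
    }
    return frames
}

func main() {
    // Two tiny made-up frames; the second one contains an escaped 0x7E
    // inside its payload.
    raw := []byte{0x10, 0x01, 0x02, 0x7E, 0x20, 0x7D, 0x5E, 0x03, 0x7E}
    for i, frame := range splitFrames(raw) {
        fmt.Printf("frame %d: % x\n", i, frame)
    }
}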

I have made a very simple utility available on github that will enable logging and then convert radio messages to the Osmocom GSMTAP protocol and either send them to a remote host using UDP or write them to a pcap file. The result can be analyzed using wireshark.

Set-up

You will need a new enough Linux kernel (e.g. >= Linux 4.4) to have the modems recognized and initialized properly. This will create four ttyUSB serial devices, a /dev/cdc-wdmX and a wwanX interface. The latter two can be used to get data connectivity as a normal network interface instead of launching pppd. In short, these modules are super convenient for adding connectivity to a product.

PCengines APU2 with Quectel EC20 and Quectel UC20

Building

The repository includes a shell script to build some dependencies and the main utility. You will need to install autoconf, automake, libtool, pkg-config, libtalloc, make and gcc on your Linux distribution.

git clone git://github.com/moiji-mobile/diag-parser
cd diag-parser
./build/build_local.sh

Running

Assuming that your modem has exposed the DIAG debug interface on /dev/ttyUSB0 and you have your wireshark running on a system with the internal IPv4 address of 10.23.42.7 you can run the following command.

./diag-parser -g 10.23.42.7 -i /dev/ttyUSB0

Exploring

Analyzing UMTS with wireshark: the capture below was taken with the Quectel module. It allows you to see the radio messages used to register with the network, when sending an SMS and when placing calls.

Wireshark dissecting UMTS

Using docker at the Osmocom CI

As part of the Osmocom.org software development we have a Jenkins set-up that executes unit and system tests. For OpenBSC we compile the software, then execute the unit tests and finally run a bunch of system tests. The system tests verify making configuration changes through the telnet interface and the machine control interface, might try to connect to other parts, etc.

In the past this was executed after a committer had pushed his changes to the repository and the build time didn’t matter. As part of the move to Gerrit code review we execute them before, and this means that people might need to wait for the result… (and waiting for a computer shouldn’t be necessary these days).

sysmocom is renting a dedicated build machine to speed up compilation and I have looked at how to execute the system tests in parallel. The issue is that during a system test we bind to ports on localhost, and that means we cannot have two test runs at the same time.

I decided to use the Linux network namespace support and opted for docker to achieve it. There are some hiccups but in general it is a great step forward. Using a statement like the following we execute our CI script in a clean environment.

$ docker run --rm=true -e HOME=/build -w /build -i -u build -v $PWD:/build osmocom:amd64 /build/contrib/jenkins.sh

As part of the OpenBSC build we are re-building dependencies, and thanks to building in the virtual /build directory we can look at archiving libosmocore/libosmo-sccp/libosmo-abis and not rebuilding them all the time.