Punched holes in the great firewall?

I am in Shanghai right now and I was surprised that I could access the forbidden fruits, e.g. m.facebook.com:

 1  172.31.24.1 (172.31.24.1)  1.359 ms  1.079 ms  1.049 ms
 2  180.168.46.17 (180.168.46.17)  1.262 ms  1.180 ms  1.073 ms
 3  10.10.96.145 (10.10.96.145)  29.356 ms  29.781 ms  29.607 ms
 4  10.10.199.179 (10.10.199.179)  40.466 ms  40.220 ms  40.559 ms
 5  203.131.250.221 (203.131.250.221)  40.208 ms  40.581 ms  40.139 ms
 6  ae-0.facebook.chwahk02.hk.bb.gin.ntt.net (203.131.241.62)  41.732 ms  46.681 ms  42.098 ms
 7  psw01a.hkg3.tfbnw.net (173.252.64.176)  42.154 ms
    psw01b.hkg3.tfbnw.net (173.252.64.175)  42.104 ms
    psw01c.hkg3.tfbnw.net (173.252.64.174)  40.703 ms
 8  msw1ad.01.hkg3.tfbnw.net (204.15.21.235)  41.912 ms
    msw1ab.01.hkg3.tfbnw.net (204.15.21.247)  41.140 ms
    msw1as.01.hkg3.tfbnw.net (173.252.65.92)  40.938 ms
 9  edge-star-mini-shv-01-hkg3.facebook.com (31.13.95.36)  42.862 ms  41.607 ms  41.950 ms

So the traffic first goes to an ISP in Shanghai, then into a private network, re-surfaces in Hong Kong and from there reaches the forbidden fruit? Is that normal routing?

Leaving Berlin, saying hello to Amsterdam

Berlin continues to gain a lot of popularity. Culturally and culinarily it is an awesome place, and despite rising rents it remains more affordable than other cities. Economically Berlin attracts new companies and branch offices as well. At the same time I felt the itch and it was time to leave my home town once again. In the end I settled on the bicycle-friendly (and sometimes sunny) city of Amsterdam.

My main interest remains building reliable systems with Smalltalk, C/C++ and Qt, learning new technology (Tensorflow? Rust? ElasticSearch, Mongo, UUCP), talking about GSM (SCCP, SIGTRAN, TCAP, ROS, MAP, Diameter, GTP) or getting re-exposed to WebKit/Blink.
If you are in Amsterdam, or if you know people or companies there, I am happy to meet and make new contacts.

C++, Qt and Treefrog to build user facing web applications

In the past I have written about my usage of Tufao and Qt to build REST services. This time I am writing about my experience of using the TreeFrog framework to build a full web application.

You might wonder why one would want to build such a thing in a statically typed and compiled language instead of something more dynamic. There are a few reasons for it:

  • Performance: The application is intended to run on our sysmoBTS GSM Basestation (TI Davinci DM644x). By modern standards it is a very low-end SoC (ARMv5te instruction set, single core, little RAM, etc.) but still perfectly fine to run a GSM network.
  • Interface: For GSM we have various libraries with a C programming interface and they are easy to consume from C++.
  • Compilation/Distribution: By (cross-)building the application we get a “single” executable and avoid the dependency mess of Ruby.
The second decision was not to use Tufao but to search for a framework that has user management and a template/rendering/canvas engine built in. At the Chaos Communication Camp in 2007 I remember having overheard a conversation about “Qt for the Web” (Wt, the C++ Web Toolkit), and this was the first framework I looked at. It seems like a fine project/product, but interfacing with Qt seemed like an afterthought. I continued to look and ended up finding and trying the TreeFrog framework.
I am really surprised that this project has existed for so long without me having heard about it. It is built on top of Qt, uses QtSQL for the ORM mapping and QMetaObject for dispatching to controllers and for the template engine, and it resembles Ruby on Rails a lot. It ships two template engines, routes URLs to controllers/slots, and one can embed any C++ in the templates. The documentation is complete, and by using the search on the website I found everything I needed, even for my “advanced” topics. Because of my own stupidity I ended up single-stepping through the code, and a Qt coder should feel right at home there.
My favorite features:
  • tspawn model TableName autogenerates a C++ model based on the table in the database; re-running it to update an existing model works as well.
  • The application builds a libmodel.so, libhelper.so (I removed that) and libcontroller.so. When the application is started with the -r option it will respawn itself. At first I thought I would not like it, but it improves round-trip times.
  • C++ in the template. The ERB template is parsed, a C++ class is generated and its ::toString() method produces the HTML code. So in case something goes wrong, it is very easy to inspect (see the small sketch below).
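
To give a rough idea of how a controller looks, here is a minimal sketch based on my reading of the TreeFrog tutorial; the StatusController name and the bts variable are made up, and details such as the exact macros may differ between versions:

#include "applicationcontroller.h"

// Hypothetical controller: TreeFrog routes /status/index to the index() slot.
// render() then picks up the matching ERB template (views/status/index.erb),
// which is itself turned into a C++ class whose ::toString() produces the HTML.
class T_CONTROLLER_EXPORT StatusController : public ApplicationController
{
    Q_OBJECT
public slots:
    void index()
    {
        QString bts = QStringLiteral("sysmoBTS");  // made-up example data
        texport(bts);   // make "bts" available to the template
        render();       // render views/status/index.erb
    }
};

T_DEFINE_CONTROLLER(StatusController)
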
If you are currently using Ruby on Rails or Django but would like to do something similar with C++, have a look at TreeFrog. I really like it so far.

HNBAP and RANAP support in Osmocom.org

Sysmocom is in the process of adding 3G support to OpenBSC. This is done by using an existing hNodeB femtocell that exposes the Iuh interface. The binary protocols for Iuh (and the related protocols) are defined using Abstract Syntax Notation One (ASN.1), and the aligned packed encoding rules (APER) are used to encode/decode the data.

After exploring several options LaForge picked the one that adds APER support to Lev Walkin’s ASN.1 compiler, simplifies the 3GPP ASN.1 input files and then uses a Python script to post-process the result.

The next issue is that we have two protocol suites, HNBAP and RANAP, and want to use them inside the same codebase. To avoid conflicting types LaForge extended the asn1c compiler to add a prefix to the generated types, and we are using this patched compiler.
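
To illustrate the clash the prefix avoids: both protocol suites define an ASN.1 type called Cause. The declarations below are purely illustrative; the real generated definitions are more involved and the names depend on the prefix passed to asn1c.

/* Without a prefix asn1c would emit a "Cause_t" for each protocol and the two
 * definitions would collide when linked into one binary. With the patched
 * compiler each suite gets its own prefix: */
typedef struct HNBAP_Cause HNBAP_Cause_t;  /* illustrative only */
typedef struct RANAP_Cause RANAP_Cause_t;  /* illustrative only */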

At this point we had an encoder/decoder for the HNBAP and RANAP protocols and could begin writing our own Free Software HNB-GW. After working with the HNB-GW code Daniel noticed several crashes. The crashes were related to making deep copies of the decoded data, and it took several iterations to neither leak nor double-free.

We have not had crashes, leaks or other issues in this part of the code for quite a while, and it seems time for a formal release. To build the osmo-iuh module you will need:

  • master of libosmocore
  • master of libosmo-abis
  • master of libosmo-netif
  • sysmocom/iu of libosmo-sccp
  • aper-prefix of asn1c
  • master of libasn1c
  • master of osmo-iuh
All of these modules can be found on git.osmocom.org, and osmo-iuh/contrib/jenkins.sh is a simple script to build the code, regenerate the source files and run the tests.
The work has been made possible thanks to NLnet.

Adding menu items in redmine (e.g. a link to contact information)

The Osmocom project is switching from a list of separate trac projects to a single instance of redmine. One German legal requirement is to publish contact information (Impressum) and it needs to be easily discoverable. The easiest option seems to be to add it to the top-level menu of our redmine. This way it will be available on all pages and just one click away (or two on mobile, to open the menu).

The snippet below can be placed in a file called plugins/impressum_plugin/init.rb and uses the Redmine::MenuManager to add an entry to the top-level menu. It is everything that is needed; we want the item to be last and to point to a wiki page.

# Add an Impressum link pointing to a wiki page, and add it last
Redmine::MenuManager.map :top_menu do |menu|
  menu.push(:impressum, '/projects/cellular-infrastructure/wiki/Contact', :last => true)
end

osmo-pcap capture on the edge and store in the center

Imagine you run a GSM network and you have multiple systems at the edge of your network that communicate with other systems. For debugging reasons you might want to collect traffic and then look at it to explore an issue or look at it systematically to improve your network, your roaming traffic, etc.

The first approach might be to run tcpdump on each of these systems, run it in a round-robin manner, compress the old traffic and then have a script that downloads/uploads it once a day to a central place. The issue is that each node needs to have enough disk space, you might not be happy keeping old files on the edge, or you just don’t know when a good time to copy them is.

Another approach is to create an aggregation framework. A client will use libpcap to capture the traffic and then redirect it to a central server. The central server will then store the traffic and might rotate based on size or age of the file. Old files can then be compressed and removed.

I created the osmo-pcap tool many years ago and have recently fixed a 64-bit PCAP header issue (the timeval in the on-disk header is 32-bit), fixed the collection of jumbo frames, updated the project’s README.md, created packages for Debian, Ubuntu, CentOS, OpenSUSE and SLES, and made sure that it can be compiled and used on FreeBSD 10 as well.
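
To sketch the 64-bit header pitfall (a simplified illustration, not the actual osmo-pcap code; the field names follow the published pcap record header format):

#include <pcap.h>
#include <cstdint>

// On-disk pcap per-packet record header: the timestamps are always 32-bit,
// regardless of how wide time_t is on the capturing host.
struct pcap_file_rec_hdr {
    uint32_t ts_sec;   // seconds
    uint32_t ts_usec;  // microseconds
    uint32_t caplen;   // bytes actually captured
    uint32_t len;      // original length of the packet on the wire
};

// Writing the in-memory struct pcap_pkthdr verbatim is the bug: on a 64-bit
// host its struct timeval is 16 bytes, the record header grows and the file
// becomes unreadable. Converting the fields explicitly avoids that:
static pcap_file_rec_hdr to_file_header(const struct pcap_pkthdr &hdr)
{
    pcap_file_rec_hdr rec;
    rec.ts_sec  = static_cast<uint32_t>(hdr.ts.tv_sec);
    rec.ts_usec = static_cast<uint32_t>(hdr.ts.tv_usec);
    rec.caplen  = hdr.caplen;
    rec.len     = hdr.len;
    return rec;
}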

If you are using or decided not to use this software I would be very happy to hear about it.

A C++ project without Qt

My primary language is Smalltalk (Pharo and GNU Smalltalk) and I had not written much C++1x code yet, so for a new project I decided to use C++ and experiment. The task was rather simple: receive a message, check if this message type needs to be rewritten, parse it, make a MongoDB look-up, patch the message, send it.

While looking for MongoDB drivers for Qt I only found the pure C++ driver that works with std::string and exceptions and only offers a blocking interface. At this point I decided to see how painful using the STL is these days and opted for a pure STL project. Many years ago the only online documentation came from SGI, there were no examples, no autocompletion in editors, etc.

The application is relatively trivial. It uses lambdas to hook two components (the receiver and the patcher) together, std::thread with lambdas to spawn worker threads because of the blocking interface of the database driver, and std::chrono for some ad-hoc StatsD code to add event counters and performance monitoring.
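
A rough, simplified sketch of that wiring (the Receiver, Patcher and handle names are made up for illustration; the real application obviously does more):

#include <functional>
#include <string>
#include <thread>
#include <vector>

// Hypothetical receiver: invokes a callback for every incoming message.
struct Receiver {
    std::function<void(const std::string&)> on_message;
    void deliver(const std::string& msg) { if (on_message) on_message(msg); }
};

// Hypothetical patcher: parses the message, does the blocking MongoDB
// look-up, patches the message and sends it on.
struct Patcher {
    void handle(const std::string&) { /* parse, look up, patch, send */ }
};

int main()
{
    Receiver receiver;
    Patcher patcher;
    std::vector<std::thread> workers;

    // Hook the two components together with a lambda; because the database
    // driver blocks, each message is handed to its own worker thread.
    receiver.on_message = [&](const std::string& msg) {
        workers.emplace_back([&patcher, msg] { patcher.handle(msg); });
    };

    receiver.deliver("example message");
    for (auto &w : workers)
        w.join();
}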

So how did this experiment go?

The available C++11/C++14 documentation has improved compared to the old times, but it is still below the quality of the Qt documentation.

Once I deployed this on Debian I immediately had odd crashes, and I blame ABI issues, as my application used C++1x but the mongo-cxx-driver was apparently not built with it. I ended up packaging the latest stable (“legacy”) standalone driver for Debian and made sure it is compiled with C++1x.

The std::chrono API has potential (as it allows me to control the resolution) but the API, especially in C++11 without the new literals, makes easy things difficult. All I wanted was something like QTime::elapsed() or QElapsedTimer::elapsed(). So I think the Qt API is better designed for this use case (my measurement resolution is milliseconds, not picoseconds).
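
For comparison, here is roughly what the millisecond measurement looks like in plain C++11 std::chrono versus the Qt one-liner (a sketch; the timed work is a placeholder):

#include <chrono>
#include <cstdio>

int main()
{
    using Clock = std::chrono::steady_clock;
    const Clock::time_point start = Clock::now();

    // ... the work to be measured goes here ...

    const auto elapsed_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        Clock::now() - start).count();
    std::printf("took %lld ms\n", static_cast<long long>(elapsed_ms));

    // With Qt the same measurement is just:
    //   QElapsedTimer timer;
    //   timer.start();
    //   /* work */
    //   qint64 ms = timer.elapsed();
}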

For socket code I was using pure C/libc. As my database code was blocking I could make my socket code blocking as well.

I normally use vim, but when working with an unknown API I totally enjoy the comfort QtCreator provides. I was able to jump to the header files and explore the interface. I think it has saved me quite some time while wrangling with the STL.

New languages come with their own build tools these days (Rust/Cargo, Ruby/Rake, …) and C++1x doesn’t have anything to offer here. The options I considered were autotools, CMake and qmake. I think autotools are overkill here, I dislike CMake for various reasons and I like the simplicity of qmake.

In the past I had played with the Google C++ unit-testing framework and actually liked it a lot. These days one is asked to copy and paste the framework into the code base. It is an ongoing struggle whether one should embed third-party libraries or not, but as my application is rather small I decided it is a no-go. I have used and enjoyed the Qt testlib, so my STL-only application is tested using QObject and QCoreApplication.

Conclusion:

The STL (and its documentation) has improved. I still prefer the API design of Qt and, when not just interfacing with a library that uses std::string, Qt is still my first choice (as it provides event loops, signals/slots, networking, HTTP, etc.).

OpenCore and Python moving to Github

Some Free Software projects have already moved to Github, some probably plan to, and the Python project will move soon. I have not followed the reasons why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime, offer checkouts through ssh, git and http (good for corporate firewalls) as well as a Subversion interface, they have integrated wiki and ticket management, the fork feature allows an upstream to discover what is being done to the software, and the pull requests and the integration with third-party providers are great. The last item allows many nice things, especially integrating with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).

Not everything is great though. As a Free Software project one might decide that using proprietary JavaScript to develop and interact with a Free Software project is not acceptable, one might want to control the repository oneself, and then people look for alternatives. At the Osmocom project we are using cgit, mailing lists, patchwork and trac, we host our own jenkins, and we mirror some of our repositories to github.com for easy access. Another option is to find a platform like Github that is Free, and a lot of people look or point to gitlab.com.

From a freedom point of view I think Gitlab is a lot worse than Github. They try to create the illusion that it is a Free Software alternative to Github.com and they offer to host your project, but if you want the same features for self-hosting you will notice that you fell for their marketing. Their website prominently states “Runs GitLab Enterprise Edition”. If you have a look at the feature comparison between the “Community Edition” (the Free Software project) and their open core additions (the Enterprise Edition), you will notice that many of the extra features are essential.

So when deciding whether to put your project on github.com or gitlab.com, the question is not between proprietary and Free Software but essentially between proprietary and proprietary, and as such there is no difference.

Build or buy a GSM HLR? Is there an alternative?

The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (that actually works), in most cases the customer will end up in a vendor lock-in:

  • The vendor might require running on hardware sold by that vendor. This might just be a Dell box with a custom front, really custom hardware in a custom chassis, or it might even require you to put up an entire rack. Either way you are tied to a single supplier.
  • It might come with a yearly license (or support fee) and on top of that might be dongled, so after a reboot the service might not start because the new license key has not been copied.
  • The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for steering of roaming or multi-IMSI, and you will be sure to pay a premium for these features (even if they are off-the-shelf extensions).
  • There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge you a premium because the vendor can.
The alternative is to build a component from scratch, and the initial progress will be great as the technology is more manageable than many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work, but only then the fun starts. How big is your test suite? Do you have tests for ITU Q.787? How will you do load-balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their heads (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN.1, TCAP).
But there is a third way and it is available today. Look for a Free Software HLR and give it a try. Check which features are missing and which you want, and develop them yourself or ask a company like sysmocom to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:
  • You can run the component on any Linux/FreeBSD system. On physical hardware, on virtualized hardware, together with other services or without other services. You decide.
  • The software will always be yours. Once you have a running system, there will be nothing (besides time_t overflowing) that has been designed to fail (no license key expires).
  • Independence from a single supplier. You can build a local team to maintain the software, or you can find another supplier to maintain it.
  • Built for change. Having access to the source code enables you to modify it, and with a Free Software license you are allowed to run your modified versions as well.
The only danger is to make sure not to fall into the OpenCore trap that surrounds many OpenSource projects. Make sure that everything you need is available as source and that the license allows you to run modified copies.

osmo-pcu and a case for Free Software

Last year Jacob and I worked on the osmo-sgsn of OpenBSC. We improved the stability and reliability of the system and moved it to the next level. By adding the GSUP interface we are able to connect it to our commercial-grade Smalltalk MAP stack and use it in a real-world production GSM network. While working on and manually testing this stack we did not use our osmo-pcu software but another, proprietary IP-based BTS; after all, we didn’t want to debug PCU issues at the same time.

This year Jacob has taken over as the maintainer of osmo-pcu. He started with a fix for a frequent crash (which was introduced because we understood the specification on TBF re-use better but not the code), he has spent hours and hours reading the specification, studied the log output, fixed defect after defect and then moved on to features. We tried the software at this year’s Camp and fixed another round of reliability issues.

Some weeks ago I noticed that the proprietary IP-based BTS had been moved from the desk onto the shelf. In contrast to the proprietary BTS, issues now have a real chance of being resolved. It might take a long time, it might require paying another entity to do it, but in the end your system will run better. Free Software allows you to genuinely own and use the hardware you have bought!