OpenCore and Python moving to Github

Some Free Software projects have already moved to GitHub, some probably plan to, and the Python project will move soon. I have not followed the reasons why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime, offer checkouts through ssh, git and http (good for corporate firewalls) as well as a subversion interface, they have integrated wiki and ticket management, the fork feature allows an upstream to discover what is being done to the software, and the pull requests and the integration with third party providers are great. The last item allows many nice things, especially integrating with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).

Not everything is great though. As a Free Software project one might decide that using proprietary JavaScript to develop and interact with a Free Software project is not acceptable, or one might want to control the repository oneself, and then people look for alternatives. At the Osmocom project we are using cgit, mailing lists, patchwork and trac, we host our own Jenkins, and then we mirror some of our repositories to github.com for easy access. Another option is to find a platform like GitHub that is Free Software, and a lot of people look or point to gitlab.com.

From a freedom point of view I think GitLab is a lot worse than GitHub. They try to create the illusion that this is a Free Software alternative to github.com. They offer to host your project, but if you want to have the same features when self-hosting you will notice that you fell for their marketing. Their website prominently states “Runs GitLab Enterprise Edition”. If you have a look at the feature comparison between the “Community Edition” (the Free Software project) and their open core additions (the Enterprise Edition) you will notice that many of the extra features are essential.

So when deciding between putting your project on github.com or gitlab.com, the question is not between proprietary and Free Software but essentially between proprietary and proprietary, and as such there is no difference.

Build or buy a GSM HLR? Is there an alternative?

The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (one that actually works), in most cases the customer will end up in a vendor lock-in:

  • The vendor might require you to run it on hardware sold by the vendor. This might just be a Dell box with a custom front, really custom hardware in a custom chassis, or it might even require you to put up an entire rack. Either way you are tied to a single supplier.
  • It might come with a yearly license (or support fee) and on top of that might be dongled, so after a reboot the service might not start because the new license key has not been copied.
  • The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for roaming steering or multi-IMSI, and you can be sure to pay a premium for these features (even if they are off-the-shelf extensions).
  • There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge a premium because the vendor can.
The alternative is to build a component from scratch, and the initial progress will be great as the technology is more manageable than it was many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work, but only then will the fun start. How big is your test suite? Do you have tests for ITU Q.787? How will you do load-balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their head (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN.1 and TCAP).
But there is a third way and it is available today. Look for a Free Software HLR and give it a try. Check which features are missing and which you want, and develop them yourself or ask a company like sysmocom to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:
  • You can run the component on any Linux/FreeBSD system. On physical hardware, on virtualized hardware, together with other services or on its own. You decide.
  • The software will always be yours. Once you have a running system, there is nothing (besides time_t overflowing) that has been designed to fail (no license key expires).
  • Independence of a single supplier. You can build a local team to maintain the software, you can find another supplier to maintain it.
  • Built for change. Having access to the source code enables you to modify it, with a Free Software license you are allowed to run your modified versions as well.
The only danger is to make sure not to fall into the OpenCore trap that surrounds many OpenSource projects. Make sure that everything you need is available in source form and that you are allowed to run modified copies.

osmo-pcu and a case for Free Software

Last year Jacob and I worked on the osmo-sgsn of OpenBSC. We improved the stability and reliability of the system and moved it to the next level. By adding the GSUP interface we are able to connect it to our commercial grade Smalltalk MAP stack and use it in a real-world production GSM network. While working on and manually testing this stack we did not use our osmo-pcu software but another proprietary IP based BTS; after all, we didn’t want to debug PCU issues at the same time.

This year Jacob has taken over as the maintainer of osmo-pcu. He started by fixing a frequent crash (which had been introduced because we understood the specification on TBF re-use better, but not the code), he has spent hours and hours reading the specification, studied the log output and fixed defect after defect, and then moved on to features. We tried the software at this year's Camp and fixed another round of reliability issues.

Some weeks ago I noticed that the proprietary IP based BTS had been moved from the desk into the shelf. In contrast to the proprietary BTS, issues have a real possibility of being resolved. It might take a long time, it might take paying another entity to do it, but in the end your system will run better. Free Software allows you to genuinely own and use the hardware you have bought!

Using GNU autotest for running unit tests

This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on our usage of GNU autotest.

GNU autoconf ships with a not very well known piece of software. It is called GNU autotest and we will focus on it in this blog post.

GNU autotest is a very simple framework/test runner. One needs to define a testsuite and this testsuite will launch test applications and record the exit code, stdout and stderr of each test application. It can diff the output with the expected one and fail if they do not match. Like any of the GNU autotools, a log file is kept about the execution of each test. This tool can be nicely integrated with automake’s make check and make distcheck. This will execute the testsuite and, in case of a test failure, fail the build.

The way we use it is quite simple. We create a simple application inside the tests/testname directory and most of the time just capture the output on stdout. Currently no unit-testing framework is used; instead a simple application is built that mostly uses OSMO_ASSERT to assert the expectations. In case of a failure the application will abort and print a backtrace. This means that in case of a failure the stdout will not be as expected, the exit code will be wrong as well, and the testcase will be marked as FAILED.
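
To make this concrete, here is a minimal sketch (not taken from our repositories) of what such a test application can look like. OSMO_ASSERT comes from libosmocore's osmocom/core/utils.h; the encode_value function is purely hypothetical.

 #include <osmocom/core/utils.h>
 #include <stdio.h>

 /* hypothetical function under test */
 static int encode_value(int in)
 {
 	return in * 2;
 }

 static void test_encode(void)
 {
 	/* everything printed to stdout is diffed against the expected output */
 	printf("Testing value encoding\n");

 	/* on failure OSMO_ASSERT aborts and prints a backtrace, so both the
 	 * exit code and the captured stdout will differ from the expectation */
 	OSMO_ASSERT(encode_value(2) == 4);
 }

 int main(int argc, char **argv)
 {
 	test_encode();
 	printf("Done\n");
 	return 0;
 }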

The following will go through the details of enabling autotest in a project.

Enabling GNU autotest

The configure.ac file needs a line like this: AC_CONFIG_TESTDIR(tests). It needs to be put after the AC_INIT and AM_INIT_AUTOMAKE directives, and make sure AC_OUTPUT lists tests/atlocal.
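
A minimal configure.ac following this recipe could look like the sketch below (the project name and file list are made up):

 AC_INIT([myproject], [0.0.1])
 AM_INIT_AUTOMAKE

 dnl register the directory that contains testsuite.at
 AC_CONFIG_TESTDIR(tests)

 AC_OUTPUT(
     Makefile
     tests/Makefile
     tests/atlocal)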

Integrating with the automake

The next thing is to define a testsuite inside the tests/Makefile.am. This is some boilerplate code that creates the testsuite and makes sure it is invoked as part of the build process.

 # The `:;' works around a Bash 3.2 bug when the output is not writeable.
 $(srcdir)/package.m4: $(top_srcdir)/configure.ac
 	:;{ \
 	  echo '# Signature of the current package.' && \
 	  echo 'm4_define([AT_PACKAGE_NAME],' && \
 	  echo '  [$(PACKAGE_NAME)])' && \
 	  echo 'm4_define([AT_PACKAGE_TARNAME],' && \
 	  echo '  [$(PACKAGE_TARNAME)])' && \
 	  echo 'm4_define([AT_PACKAGE_VERSION],' && \
 	  echo '  [$(PACKAGE_VERSION)])' && \
 	  echo 'm4_define([AT_PACKAGE_STRING],' && \
 	  echo '  [$(PACKAGE_STRING)])' && \
 	  echo 'm4_define([AT_PACKAGE_BUGREPORT],' && \
 	  echo '  [$(PACKAGE_BUGREPORT)])'; \
 	  echo 'm4_define([AT_PACKAGE_URL],' && \
 	  echo '  [$(PACKAGE_URL)])'; \
 	} >'$(srcdir)/package.m4'

 EXTRA_DIST = testsuite.at $(srcdir)/package.m4 $(TESTSUITE)
 TESTSUITE = $(srcdir)/testsuite
 DISTCLEANFILES = atconfig

 check-local: atconfig $(TESTSUITE)
 	$(SHELL) '$(TESTSUITE)' $(TESTSUITEFLAGS)

 installcheck-local: atconfig $(TESTSUITE)
 	$(SHELL) '$(TESTSUITE)' AUTOTEST_PATH='$(bindir)' \
 		$(TESTSUITEFLAGS)

 clean-local:
 	test ! -f '$(TESTSUITE)' || \
 		$(SHELL) '$(TESTSUITE)' --clean

 AUTOM4TE = $(SHELL) $(top_srcdir)/missing --run autom4te
 AUTOTEST = $(AUTOM4TE) --language=autotest
 $(TESTSUITE): $(srcdir)/testsuite.at $(srcdir)/package.m4
 	$(AUTOTEST) -I '$(srcdir)' -o $@.tmp $@.at
 	mv $@.tmp $@

Defining a testsuite

The next part is to define which tests will be executed. One needs to create a testsuite.at file with content like the one below:
 AT_INIT  
 AT_BANNER([Regression tests.])  
 AT_SETUP([gsm0408])  
 AT_KEYWORDS([gsm0408])  
 cat $abs_srcdir/gsm0408/gsm0408_test.ok > expout  
 AT_CHECK([$abs_top_builddir/tests/gsm0408/gsm0408_test], [], [expout], [ignore])  
 AT_CLEANUP  
This will initialize the testsuite and create a banner. The lines between AT_SETUP and AT_CLEANUP represent one testcase. In there we copy the expected output from the source directory into a file called expout, and then inside the AT_CHECK directive we specify what to execute and what to do with the output: the empty second argument means the command must exit with status 0, expout means stdout has to match the contents of the previously created expout file, and ignore means stderr is not compared.

Executing a testsuite and dealing with failure

The testsuite will be automatically executed as part of make check and make distcheck. It can also be manually executed by entering the tests directory and executing the following.

 $ make testsuite  
 make: `testsuite' is up to date.  
 $ ./testsuite  
 ## ---------------------------------- ##  
 ## openbsc 0.13.0.60-1249 test suite. ##  
 ## ---------------------------------- ##  
 Regression tests.  
  1: gsm0408                                  ok  
  2: db                                       ok  
  3: channel                                  ok  
  4: mgcp                                     ok  
  5: gprs                                     ok  
  6: bsc-nat                                  ok  
  7: bsc-nat-trie                             ok  
  8: si                                       ok  
  9: abis                                     ok  
 ## ------------- ##  
 ## Test results. ##  
 ## ------------- ##  
 All 9 tests were successful.  
In case of a failure the following information will be printed and can be inspected to understand why things went wrong.
  ...  
  2: db                       FAILED (testsuite.at:13)  
 ...  
 ## ------------- ##  
 ## Test results. ##  
 ## ------------- ##  
 ERROR: All 9 tests were run,  
 1 failed unexpectedly.  
 ## -------------------------- ##  
 ## testsuite.log was created. ##  
 ## -------------------------- ##  
 Please send `tests/testsuite.log' and all information you think might help:  
   To: 
   Subject: [openbsc 0.13.0.60-1249] testsuite: 2 failed  
 You may investigate any problem if you feel able to do so, in which  
 case the test suite provides a good starting point. Its output may  
 be found below `tests/testsuite.dir'.  
You can go to tests/testsuite.dir and have a look at the failing tests. For each failing test there will be one directory that contains a log file about the run and the output of the application. We are using GNU autotest in libosmocore, libosmo-abis, libosmo-sccp, OpenBSC, osmo-bts and cellmgr_ng.

Interested in MIPS/UCLIBC/DirectFB becoming a Tier1 platform?

Are you running Qt on a MIPS based system? Is your toolchain using uClibc? Do you plan to use Qt with DirectFB? If not, you can probably stop reading.

During the Qt5 development the above was my primary development platform and I spent hours improving the platform and the Qt support. I descended down to the kernel and implemented (and later moved) userspace callchain support for MIPS [1][2] in perf. This allows getting stacktraces/callchains for userspace binaries even when there is no framepointer. I stress-tested the DirectFB platform plugin and found various issues in DirectFB, e.g. this memleak. I modified the V8 MIPS JIT to provide the necessary routines for QML. While doing this I noticed that the ARM implementation was broken and helped to fix it.

At the time Nokia was still using Puls. This meant that getting an external build to integrate with their infrastructure was not possible, so I started to set up a Jenkins for DirectFB and Qt myself. The Qt Jenkins is compiling QtBase, QtJsBackend, QtXmlPatterns, QtDeclarative and QtWebKit for MIPS/Linux/uClibc. On top of these there are daily builds for the various QtBase configurations (dist, large, full, medium, small, minimal) and runs of the V8 unit tests using the built-in simulator for ARM and MIPS. The goal was to extend this to run all the Qt tests on real hardware. The unit that supported my work was shut down before I could implement it and the platform work has mostly been in maintenance mode since then.

This all worked nicely for the releases up to Qt 5.0, but when Qt 5.1 got merged into the stable branch and received some updates the build started to break, and I don’t have enough spare time to fix that.

If anyone is interested in either taking over the CI or helping to make this part of my work again I would be very happy.

Migrating *.osmocom.org trac installations to a new host

Yesterday I migrated all trac installations except openbsc.osmocom.org to a new host. We are now running trac version 0.12 and all plugins in use should be installed. As part of the upgrade all tracs should be available via https.

There are various cleanups to do in the next couple of weeks. We should run a similar trac.ini on all the installations, we need to migrate from SQLite to MySQL/MariaDB, and all login pages/POSTs should redirect to https instead of doing a POST/basic auth in plain text.

We are now using a frontend nginx, and /trac/chrome/* is served from a cache with your browser being asked to cache the files for 90 hours. This should already reduce the load on the server a bit and result in better page loads.
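
For illustration, such a caching location block can look roughly like the sketch below (the upstream address and the cache zone name are made up, and a matching proxy_cache_path needs to be defined elsewhere):

 location /trac/chrome/ {
     proxy_pass        http://127.0.0.1:8000;
     proxy_cache       trac_static;
     proxy_cache_valid 200 90h;
     expires           90h;
 }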

Know your tools – mudflap

I am currently implementing GSM ARFCN range encoding and I am doing this by writing the algorithm and a test application. Somehow my test application ended in a segmentation fault after all tests ran. The first thing I did was to use gdb on my application:

$ gdb ./si_test
(gdb) r
...
Program received signal SIGSEGV, Segmentation fault.
0x00000043 in ?? ()
(gdb) bt
#0  0x00000043 in ?? ()
#1  0x00000036 in ?? ()
#2  0x00000040 in ?? ()
#3  0x00000046 in ?? ()
#4  0x00000009 in ?? ()
#5  0xb7ff6821 in ?? () from /lib/ld-linux.so.2

The application crashed somewhere in glibc on the way to exit. The next thing I used was valgrind, but it didn’t report any invalid memory access, so I had to resort to today’s tool. It is called mudflap and has been part of GCC for a long time. Let me show you an example and then discuss how valgrind fails and how mudflap can help.

int main(int argc, char **argv) {
  int data[23];
  data[24] = 0;  /* out of bounds: valid indices are 0..22 */
  return 0;
}

The above code obviously writes out of the array bounds. But why can’t valgrind detect it? Well, we are writing somewhere on the stack and this stack has been properly allocated. valgrind can’t know that &data[24] is not part of the memory to be used by data.
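
For contrast, a heap-based version of the same bug (a sketch) is something valgrind does catch, because it knows the exact size of each heap allocation:

 #include <stdlib.h>

 int main(int argc, char **argv) {
   int *data = malloc(23 * sizeof(int));
   data[24] = 0;  /* write past the allocation: reported as an invalid write */
   free(data);
   return 0;
 }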

mudflap comes to the rescue here. It can be enabled by using -fmudflap and linking with -lmudflap. This will make GCC emit extra code to check all array/pointer accesses. This way GCC will track all allocated objects and verify each access to memory before doing it.
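
Building and running the test then looks something like this (the file name is from my example; mudflap shipped with the GCC versions of that era):

 $ gcc -g -fmudflap si_test.c -o si_test -lmudflap
 $ ./si_test

For my code I got the following violation.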

mudflap violation 1 (check/write): time=1350374148.685656 ptr=0xbfd9617c size=4
pc=0xb75e1c1e location=`si_test.c:97:14 (range_enc_arfcns)'
      /usr/lib/i386-linux-gnu/libmudflap.so.0(__mf_check+0x3e) [0xb75e1c1e]
      ./si_test() [0x8049ab5]
      ./si_test() [0x80496f6]
Nearby object 1: checked region begins 29B after and ends 32B after
mudflap object 0x845eba0: name=`si_test.c:313:6 (main) ws'

I am presented with the filename, line and function that caused the violation, then I also get a backtrace, the kind of violation, and on top of that mudflap informs me which objects are close to the accessed address. So in this case I was writing to ws outside of its bounds.

OpenBSC/Osmocom continuous integration with Jenkins

This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on continuous integration with Jenkins.

Problem

When making a new release we often ran into the problem that files were missing from the source archive. The common error was that the compilation failed due to missing header files.
The second problem came a bit later. As part of the growth of OpenBSC/Osmocom we took code from OpenBSC and moved it into a library called libosmocore to be used by other applications. In the beginning the API and ABI of this new library were not very stable. One thing that could easily happen is that we updated the API, migrated OpenBSC to use the new API, but forgot to update one of the more minor projects, e.g. our TETRA decoder.

Solution

The solution is quite simple. The GNU automake buildsystem already provides a solution to the first problem. One simply needs to call make distcheck. This will create a new tarball and then build it. Ideally all developers run make distcheck before pushing a change into our repository, but in reality it takes too much time and one easily forgets this step.
Luckily CPU time is getting more and more affordable. This means that we can have a system that runs make distcheck after each commit. To address the second part of the problem we can rebuild all users of a specific library, and do this recursively.
The buzzword for this is Continuous Integration and the system of our choice is Jenkins (formerly known as Hudson). Jenkins has the concepts of a Job and a Node. A Job can be building a certain project, e.g. building libosmocore. A Node is a physical system with a specific compiler. A Job can instruct Jenkins to monitor our git repositories and then schedule the job to be built.
In our case we have nodes for FreeBSD/AMD64, Debian 6.0/i386 and mingw/i386. All our projects are multi-configuration projects. For some of our Jobs we build the software on FreeBSD, Debian and mingw; for others only on Debian. Another useful feature is the matrix build. This way one job can build several different configurations, e.g. debug and release.
Jenkins allows us to have dependencies between Jobs and we are using this to rebuild the users of a library after a change, e.g. build libosmo-abis after libosmocore.
The build status can be reported by email or IRC, but I generally use the RSS feed feature to find out about broken builds. This way I will be made aware of build breakages and can escalate by talking to the developer that caused the breakage.
Jenkins of Osmocom

Conclusion

The installation of Jenkins makes sure that the tarballs built with make dist contain everything needed to build the software package and that we have no silent build breakages in less active sub-projects. A nice side effect is that we get fewer emails from users due to build breakages. Setting up Jenkins is easy and everyone building software should have Jenkins or a similar tool.

Outlook

We could have more build nodes for more Linux distributions and versions. This mainly depends on volunteers donating CPU time and maintaining the Jenkins Node. Jenkins offers a variety of plugins and it appears to be easy to write new plugins. We could have plugins that monitor and plot the binary size of our libraries, check for ABI breakages, etc.

Testing in OpenBSC and Osmocom

The OpenBSC and Osmocom project has grown a lot in recent years. It has grown both in the number of people using our code and participating in the development, and also in terms of the amount of source code. As part of this growth we have more advanced testing, and the following blog posts will show what we are doing.

Each post will describe the problems we were facing and how the system deployed is helping us to resolve these issues.

Device profiles in Qt5

OpenGL and Devices

The future of Qt’s graphics stack is OpenGL (ES 2.0), but this makes things more complicated in the device space. The library names and the low level initialization needed for OpenGL are not standardized. This means that for a given board one needs to link libQtGui to different libraries, and one needs to patch the QPA platform plugins to add device specific bits. The GPU vendor might provide DirectFB/EGL integration but one needs to call a special function to bind EGL to DirectFB.

Historic Approach

The historic Qt approach is to keep patches out of tree: custom mkspecs files that need to be copied into Qt before building. I have had two issues with this approach:
  1. Device support should be an essential part of Qt5.
  2. Build testing (and later unit testing) is more complicated.

Device Profile Proposal

The approach we took is a pragmatic one: it should be easy for device manufacturers to do the right thing, and it should not be a burden for the maintainability of Qt. After some iterations we ended up with the device profile proposal and began to implement it. Most of it is merged by now.

Key Features

It begins with the ./configure script, which now has the -device=DEVICE and -device-option KEY=VALUE arguments to select a device and to pass options (e.g. additional include paths or a BSP package) to Qt. The second part is a way for a device to influence the behavior of the QPA platform plugins. Right now this applies to the DirectFB and EGLFS plugins. A device can install hooks that are called as part of the initialization of these plugins. The hook is the pragmatic approach to get a mix-in with the existing code base.
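
As an illustration, configuring Qt for one of the devices looks roughly like this (the toolchain path is made up; the exact device names and options live in the mkspecs/devices directory):

 $ ./configure -device linux-rasp-pi-g++ \
               -device-option CROSS_COMPILE=/opt/rpi/bin/arm-linux-gnueabi- \
               -opengl es2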

Supported Devices

Right now we have completed device support for the RaspberryPi, BCM97425 and AMLogic 8726-M. We do support some more devices but they might still require external patches.