Interested in MIPS/UCLIBC/DirectFB becoming a Tier1 platform?

Are you running Qt on a MIPS-based system? Is your toolchain using UCLIBC? Do you plan to use Qt with DirectFB? If not, you can probably stop reading.

During the Qt5 development the above was my primary development platform, and I spent many hours improving both the platform and the Qt support. I descended into the kernel and implemented (and later moved) userspace callchain support for MIPS [1][2] in perf. This makes it possible to get stacktraces/callchains for userspace binaries even when there is no frame pointer. I stress-tested the DirectFB platform plugin and found various issues in DirectFB, e.g. this memleak. I modified the V8 MIPS JIT to provide the routines needed by QML. While doing this I noticed that the ARM implementation was broken and helped to fix it.

At the time Nokia was still using Puls. This meant that getting an external build to integrate with their infrastructure was not possible, so I started to set up a Jenkins installation for DirectFB and Qt myself. The Qt Jenkins compiles QtBase, QtJsBackend, QtXmlPatterns, QtDeclarative and QtWebKit for MIPS/Linux/UCLIBC. On top of these there are daily builds of the various QtBase configurations (dist, large, full, medium, small, minimal) and runs of the V8 unit tests using the built-in simulator for ARM and MIPS. The goal was to extend this to run all the Qt tests on real hardware, but the unit that supported my work was shut down before I could implement it, and the platform work has mostly been in maintenance mode since then.

This all worked nicely for the releases up to Qt 5.0, but when Qt 5.1 was merged into the stable branch and received some updates, the build started to break, and I don’t have enough spare time to fix it.

If anyone is interested in either taking over the CI or helping to make this part of my work again I would be very happy.

Migrating *.osmocom.org trac installations to a new host

Yesterday I migrated all trac installations except openbsc.osmocom.org to a new host. We are now running trac version 0.12 and all the plugins in use should be installed. As part of the upgrade all tracs should be available via HTTPS.

There are various cleanups to do in the next couple of weeks: we should run a similar trac.ini on all installations, we need to migrate from SQLite to MySQL/MariaDB, and all login pages/POSTs should redirect to HTTPS instead of doing a POST/basic auth in plain text.

We are now using nginx as a frontend; /trac/chrome/* is served from a cache, and your browser is asked to cache these resources for 90 hours. This should already reduce the load on the server a bit and result in faster page loads.

AQBanking with a Deutsche Bank WebSign Card

When I opened an account with the Deutsche Bank I requested a WebSign card. This card remained mostly unused until yesterday, when I decided it was time to try it. In theory AQBanking should support this card and everything should work flawlessly, but in practice I had to spend several hours on the setup.

Basics

The biggest issue is that most of the available documentation is for older aqbanking versions, and I couldn’t find a changelog describing how to do the old things with the new software. So whenever you see a guide using aqhbci-tool you can stop reading, as it is for an old version and those commands do not exist in the new one. I understand that third-party documentation is outside of the control of the developer of aqbanking, but it would be nice if he could just provide the documentation himself.
I am doing this on Debian Unstable as of 2013-03-02; the aqbanking libraries are version 5.0.24-3 and libchipcard is version 5.0.3beta-2. I will get to the exact plugins in a second.

The other good part is that the WebSign card comes fully configured. There is no need to download a key onto the card or anything like that.

IniLetter

The Deutsche Bank might send you an Ini-Letter; I went through this almost two years ago, so I do not remember the details. The AQBanking manual appears to describe it well in chapter 6.3.2, and I think I followed those instructions back then.

Installation 

The WebSign card is a starcoscard token for AQBanking. To be able to use it you will need to install the libchipcard library, but if you install only the library you will be greeted with a meaningless error message in the UI asking you to install libchipcard. What you actually need are the plugins; in Debian Unstable the package is called libchipcard-libgwenhywfar60-plugins. I have also installed libchipcard-tools, and you should do so too.
The next thing to do is to check that you have the right card and that you have installed everything. I am using an OmniKey card reader and issued the following command:

$ pcsc_scan
PC/SC device scanner
V 1.4.21 (c) 2001-2011, Ludovic Rousseau
Compiled with PC/SC lite version: 1.8.7
Using reader plug’n play mechanism
Scanning present readers…
0: OMNIKEY AG CardMan 3021 00 00
Sat Mar  2 10:27:23 2013
Reader 0: OMNIKEY AG CardMan 3021 00 00
  Card state: Card inserted,
  ATR: 3B B7 94 00 81 31 FE 65 53 50 4B 32 33 90 00 D1

Possibly identified card (using /usr/share/pcsc/smartcard_list.txt):
3B B7 94 00 81 31 FE 65 53 50 4B 32 33 90 00 D1
Giesecke & Devrient Starcos 2.3
Deutsche Bank WebSign (RSA-Card)
G&D StarSign Token

The output shows that the card reader and the card were detected. This means we can continue and check if the libchipcard installation is complete. I am using the gct-tool to show me my user credentials. These include the User-Id and the IP address to use for the Deutsche Bank. I used the following command:

$ gct-tool showuser -t starcoscard
===== Enter Password =====
Please enter the access password for
CARD_ID
You must only enter numbers, not letters.
Input: ENTER_PIN
————————————————-
Context 1
Service        : BLZ
User Id        : USER_ID
Peer Id        : PEER_ID
Address        : IP
Port           : 3000
System Id      :
Sign Key Id    : A
Verify Key Id  : B
Encipher Key Id: C
Decipher Key Id: D
….

In case you enter the wrong PIN you have 7 more attempts to enter the right one before the card is blocked. You will need to use the --forcepin option to enter it again. Some other utilities of aqhbci-tool4 appear to become unusable once you have entered a wrong PIN. If you do not get output like the above, you are most likely missing the starcoscard plugin.

Configuration

Now that the card is known to work, one needs to configure AQBanking. With the qbankmanager and gnucash I had the issue that no dialogue was presented, so we are going to do this from the console. With the information from above and some knowledge about your bank account (other banking software is capable of taking everything from the card) you can use aqhbci-tool4 to add your user.

$ aqhbci-tool4 adduser -t starcoscard --context=1 -b BLZ -c ACCOUNT_NR -N YOUR_NAME --hbciversion=300

This will add a new user that will use context #1 of a starcoscard. By default aqhbci-tool4 would select a lower version of HBCI and would use the USER_ID for the customer name. You can verify that the setup is working by importing the accounts and getting the sysid.

$ aqhbci-tool4 getsysid
Locking users
Locking user USER
Executing HBCI jobs
AqHBCI started
Connecting to bank…
Connecting to “IP”
Connected to “IP”
Connected.
There are no tan method descriptions (yet), trying One-Step TAN.
Encoding queue
===== Enter Password =====
Please enter the access password for
CARD_NR
You must only enter numbers, not letters.
Input: ENTER_PIN
Sending queue
Waiting for response
Response received
HBCI: 0010 – Nachricht entgegengenommen. (M)
HBCI: 0020 – Dialogintialisierung erfolgreich. (M)
HBCI: 0020 – Auftrag ausgeführt. (S)
HBCI: 1050 – UPD nicht mehr aktuell. Aktuelle Version folgt. (S)
HBCI: 0020 – Information fehlerfrei entgegengenommen. (S)
Encoding queue
Sending queue
Waiting for response
Response received
HBCI: 0010 – Nachricht entgegengenommen. (M)
HBCI: 0100 – Dialog beendet. (S)
Disconnecting from bank…
Disconnected.
AqHBCI finished.

If the above fails, something is still wrong with your setup. But if it looks like the above you can use the qbankmanager to initiate bank transfers. I hope this saves someone else the time I had to spend reading outdated information. In the end it is quite easy to set up.

What is wrong with DHL (and DHL Express)

Over the last two days my frustration with DHL grew. Let me share with you why. The German Post has acquired DHL, and while most of the world thinks of DHL as DHL Express, there are two kinds of DHL: DHL Express for express delivery and DHL for normal/slow shipping.

DHL Express:

  • It is a very unresponsive company. They have two kinds of customer numbers: you need the national number to receive international orders and send national orders, and the international one to send international orders. We have asked at least six times to get an international number, and it appears that they couldn’t care less.
  • I once ordered a DHL Express pickup and paid the driver in cash; for some reason the price was a lot higher than advertised online. The explanation was that the price I paid included VAT. I have used phone, email, snail mail and fax to ask for an invoice so I can get the VAT refunded as a company. To this day I have not received it.
  • One can buy a DHL Express label online, which requires one to pick the country to ship from. The German webpage has been lacking the German translation for months, and they don’t bother to fix it.

DHL:

  • The good thing about DHL is the concept of Packstations. I can buy shipping coupons online, use one whenever I need to ship a package of the matching class, and print the label. I can then post the package, without queuing, at a machine called a Packstation.
  • DHL allows uploading CSV files for the destination addresses, and one can even embed the coupons, but there are various stupid things. It uses Latin-1 as the encoding, so if one attempts to enter an address in Chinese, funny things will happen. The street name length limit is way too low for various countries (e.g. for the Philippines). On top of that, one cannot specify everything one needs to specify for international orders.
  • When ordering a shipping label online one can enter a phone number for the receiver. If one enters a local number (e.g. 022323424), DHL decides to prefix it with 0049 and remove the leading zero. How likely is it, if I ship to the Philippines, that the number of the receiver is a German number?
  • Entering Chinese characters as the destination address works, but when attempting to print the shipping label one gets empty boxes instead of the Chinese characters. They have chosen to use a custom font that doesn’t have glyphs outside of Latin-1.
  • The DHL Packstation runs very basic software; it has a touchscreen and a barcode scanner and can open doors. In summer one can hear its fan spinning heavily. What a waste of resources.
  • The software is a joke. One can post several packages and afterwards print proofs that the items were posted. The machine asks for each receipt to be printed individually but doesn’t indicate which package it belongs to. Sure I want you to print receipt 20, and no, I really don’t want to be asked whether you should print _all_ receipts. The other nice thing is that printing can fail because the machine is out of paper, and the machine then just exits without giving you any proof.

Know your tools – mudflap

I am currently implementing GSM ARFCN range encoding, and I am doing this by writing the algorithm together with a test application. Somehow my test application ended in a segmentation fault after all tests had run. The first thing I did was to run gdb on my application:

$ gdb ./si_test
(gdb) r
...
Program received signal SIGSEGV, Segmentation fault.
0x00000043 in ?? ()
(gdb) bt
#0  0x00000043 in ?? ()
#1  0x00000036 in ?? ()
#2  0x00000040 in ?? ()
#3  0x00000046 in ?? ()
#4  0x00000009 in ?? ()
#5  0xb7ff6821 in ?? () from /lib/ld-linux.so.2

The application crashed somewhere in glibc on the way to exit. The next thing I tried was valgrind, but it didn’t report any invalid memory access, so I had to resort to today’s tool. It is called mudflap and has been part of GCC for a long time. Let me show you an example and then discuss how valgrind fails and how mudflap can help.

int main(int argc, char **argv) {
  int data[23];
  data[24] = 0;
  return 0;
}

The above code obviously writes out of the array bounds. But why can’t valgrind detect it? Well, we are writing to the stack, and the stack has been properly allocated. valgrind can’t know that &data[24] is not part of the memory to be used by data.

mudflap comes to the rescue here. It can be enabled by compiling with -fmudflap and linking with -lmudflap; this makes GCC emit extra code to check all array/pointer accesses. GCC will track all allocated objects and verify each memory access before performing it. For my code I got the following violation:

mudflap violation 1 (check/write): time=1350374148.685656 ptr=0xbfd9617c size=4
pc=0xb75e1c1e location=`si_test.c:97:14 (range_enc_arfcns)'
      /usr/lib/i386-linux-gnu/libmudflap.so.0(__mf_check+0x3e) [0xb75e1c1e]
      ./si_test() [0x8049ab5]
      ./si_test() [0x80496f6]
Nearby object 1: checked region begins 29B after and ends 32B after
mudflap object 0x845eba0: name=`si_test.c:313:6 (main) ws'

I am presented with the filename, line and function that caused the violation; I also get a backtrace, the kind of violation, and on top of that mudflap informs me which objects are close to the accessed address. So in this case I was writing outside the bounds of ws.

OpenBSC/Osmocom continuous integration with Jenkins

This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on continuous integration with Jenkins.

Problem

When making a new release we often ran into the problem that files were missing from the source archive. The common error was that the compilation failed due to missing header files.
The second problem came a bit later. As part of the growth of OpenBSC/Osmocom we took code from OpenBSC and moved it into a library called libosmocore, to be used by other applications. In the beginning the API and ABI of this new library were not very stable. One thing that could easily happen is that we updated the API and migrated OpenBSC to use it, but forgot to update one of the more minor projects, e.g. our TETRA decoder.

Solution

The solution is quite simple. The GNU Automake buildsystem already provides a solution to the first problem: one simply needs to call make distcheck, which creates a new tarball and then builds from it. Ideally all developers would run make distcheck before pushing a change into our repository, but in reality it takes too much time and one easily forgets this step.
Luckily CPU time is getting more and more affordable. This means that we can have a system that will run make distcheck after each commit. To address the second part of the problem we can rebuild all users of a specific library, and do this recursively.
The buzzword for this is Continuous Integration, and the system of our choice is Jenkins (formerly known as Hudson). Jenkins has the concepts of a Job and a Node. A Job can be building a certain project, e.g. libosmocore. A Node is a physical system with a specific compiler. A Job can instruct Jenkins to monitor our git repositories and then schedule the job to be built.
In our case we have nodes for FreeBSD/AMD64, Debian 6.0/i386 and mingw/i386. All our projects are multi-configuration projects: for some Jobs we build the software on FreeBSD, Debian and mingw, for others only on Debian. Another useful feature is the matrix build; this way one job can build several different configurations, e.g. debug and release.
Jenkins allows us to have dependencies between Jobs and we are using this to rebuild the users of a library after a change, e.g. build libosmo-abis after libosmocore.
The build status can be reported by email or IRC, but I generally use the RSS feed feature to find out about broken builds. This way I am made aware of build breakages and can escalate them by talking to the developer that caused the breakage.
Jenkins of Osmocom

Conclusion

The installation of Jenkins makes sure that the tarballs built with make dist contain everything needed to build the software package and that we have no silent build breakages in less active sub-projects. A nice side effect is that we get fewer emails from users due to build breakages. Setting up Jenkins is easy, and everyone building software should have Jenkins or a similar tool.

Outlook

We could have more build nodes for more Linux distributions and versions. This mainly depends on volunteers donating CPU time and maintaining the Jenkins Node. Jenkins offers a variety of plugins and it appears to be easy to write new plugins. We could have plugins that monitor and plot the binary size of our libraries, check for ABI breakages, etc.

Testing in OpenBSC and Osmocom

The OpenBSC and Osmocom project has grown a lot in recent years. It has grown both in the number of people using our code and participating in development, and in the amount of source code. As part of this growth we now have more advanced testing, and the following blog posts will show what we are doing.

Each post will describe the problems we were facing and how the system deployed is helping us to resolve these issues.

Introducing the Poettering scale of software awesomeness and using it

I would like to quickly introduce the Poettering scale of software awesomeness. The scale appears to be the number of features divided by time; the higher the number, the better. Evidence has shown that this scale is the best way to compare software and should be used whenever one needs to decide between competing projects.

Now that we have learned how easily one can find the better piece of software, let’s apply it. First, Emacs vs. vim. Both are more or less the same age, which allows us to find out whether vim or Emacs is better solely by looking at the features. As Emacs can send email and has tons of Lisp scripts, it is clearly more “Poettering awesome” than vim. The second example is KDE and GNOME; again, both are more or less the same age and we only need to compare the amount of features to know which one is better. We do this by looking at the configuration dialogs: KDE clearly has more options than GNOME, so KDE is clearly more “Poettering awesome” than GNOME.

Device profiles in Qt5

OpenGL and Devices

The future of Qt’s graphics stack is OpenGL (ES 2.0), but this makes things more complicated in the device space. The library names and the low-level initialization needed for OpenGL are not standardized. This means that for a given board one needs to link libQtGui to different libraries, and one needs to patch the QPA platform plugins to add device-specific bits. The GPU vendor might provide DirectFB/EGL integration, but one needs to call a special function to bind EGL to DirectFB.

Historic Approach

The historic Qt approach was to keep patches out of tree, with custom mkspecs files that needed to be copied into Qt before building. I have had two issues with this approach:
  1. Device support should be an essential part of Qt5.
  2. Build testing (and later unit testing) is more complicated.

Device Profile Proposal

The approach we took is a pragmatic one: it should be easy for device manufacturers to do the right thing, and it should not be a burden for the maintainability of Qt. After some iterations we ended up with the device profile proposal and began to implement it. Most of it is merged by now.

Key Features

It begins with the ./configure script, which now has the -device=DEVICE and -device-option KEY=VALUE options to select a device and to pass options, e.g. additional include paths or a BSP package, to Qt. The second part is a way for a device to influence the behavior of QPA platform plugins. Right now this applies to the DirectFB and EGLFS plugins. A device can install hooks that are called as part of the initialization of these plugins. The hook is the pragmatic approach to get a mix-in with the existing code base.

Supported Devices

Right now we have completed device support for the Raspberry Pi, BCM97425 and AMLogic 8726-M. We do support some more devices, but they might still require external patches.

QtMediaHub on MIPS with Qt5 and DirectFb

Ever since the start of the Qt project I have been working on DirectFB in Qt5 (and Qt4), with the remote goal of getting QtMediaHub to run. It started with catching up with the rather nice refactoring of Lighthouse in Qt5, and fixing memory and resource leaks in Qt5, in DirectFB and mostly in the DirectFB Lighthouse plugin.

It moved on to dealing with a broken “make install”, broken QML2 examples and documentation, figuring out how to get patches for QtV8/V8 into the project, adding MIPS code for the QML mode (a new global object) and global compare for MIPS, and finally working on OpenGL integration for QML2.

In this specific case I had an OpenGL ES 2.0 library coming from a vendor and created the ‘directfbegl’ plugin, which uses EGL to go from an IDirectFBSurface to an EGLSurface. I think in this specific case there is a unified 2D and 3D buffer space, which should allow a lot of cool stuff.

It mostly works. QML2 still has some way to go to work well on battery-powered devices, but it is looking quite nice.