A new lease of life for the Dell XPS 13 9343

Dell innovated in 2015 by shipping a version of its (near-borderless) XPS 13 9343 running Ubuntu instead of Windows.

As it turns out, this product was a bit rushed and/or not properly tested, which resulted in multiple issues: random crashes, a buggy keyboard/touchpad, broken audio… which is unfortunate, because the hardware itself (well, most of it anyway, more on this below) is pretty good.

Today most of the bugs above can be fixed with a little bit of tinkering:

Update to the latest BIOS, A07. I found the easiest way to apply the upgrade is to use FreeDOS, as explained here.

Replace the Wi-Fi card: early versions of the XPS 13 used a Broadcom Wi-Fi chip. It caused multiple crashes and can be swapped for an Intel chip instead. Yes, this means opening up the laptop and replacing the card, but it’s pretty trivial. The hardest part is prying the laptop open in the first place.

Replace Ubuntu with Arch Linux. Arch Linux comes with a more recent version of the Linux kernel and feels less bloated than Ubuntu. A word of caution though: version 4.5.1.1 did not recognise the Realtek ALC3263 sound card. The fix is to patch the kernel and recompile it. Sounds scary but really isn’t: detailed step-by-step instructions on how to do so are available here.

I applied the three steps above and haven’t had a problem with my laptop since; everything works as it should.

Set up a local copy of a WordPress site

How to create a local copy of your live WordPress site in six easy steps.

Prerequisites:

  • A Linux operating system with the LAMP software bundle already installed, configured and secured.
  • The Apache mod_rewrite module must be enabled.

 

Steps to set up a local copy of your WordPress site

 

√ Step 1.
Export the live database contents.

 

√ Step 2.

Export the live WordPress install.

Your hosting provider will usually assist with the two steps above.

√ Step 3.

Update the database dump obtained in step 1) above to replace all references to the live hostname (e.g. livehost.com) with localhost:

sed -i 's/livehost.com/localhost/g' live_database_dump.sql

√ Step 4.

Import the modified SQL file into a local MySQL database (the -p flag will prompt for the password):

mysql -u <username> -p <database> < live_database_dump.sql

√ Step 5.

Extract the live WordPress files obtained in step 2) above under /var/www/html.

√ Step 6.

Modify wp-config.php to point to the local database and the local URL:

define( 'DB_NAME', '<local database>' );
define( 'DB_USER', '<db user>' );
define( 'DB_PASSWORD', '<db pwd>' );
define( 'WP_SITEURL', 'http://localhost' );
define( 'WP_HOME', 'http://localhost' );

 

Why BDD is a false good idea

The cornerstone of BDD, short for Behaviour Driven Development, is the idea of a shared language used throughout the team to write and test business requirements. In particular, BDD recommends that tests should be written:

  • … before the production code is written. Which is fine, recommended even.
  • … in a ubiquitous language that is understood by all members of the team. This means expressing tests in text form using natural-language constructs. And therein lie the problems.

What’s wrong with expressing tests using natural language?

1. An extra layer of code is required to translate between the business-readable tests and the developer-readable production code. Often this is accomplished using a “framework” (shudder) such as JBehave or Cucumber. This translates into more code to write, more dependencies to add into the project and more opportunities for bugs. Plus this slows down debugging as code paths have to be traced through that same translation layer.
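To make point 1 concrete, this is roughly what the glue layer looks like with Cucumber’s Java bindings. The Account class and the step wording are invented for this sketch; the point is that every business-readable step needs a matching annotated method before any production code is exercised.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;

public class AccountSteps {

    // Hypothetical production class, embedded here to keep the sketch self-contained.
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        void withdraw(int amount) { balance -= amount; }
        int getBalance() { return balance; }
    }

    private Account account;

    @Given("an account with a balance of {int}")
    public void anAccountWithABalanceOf(int balance) {
        account = new Account(balance);
    }

    @When("I withdraw {int}")
    public void iWithdraw(int amount) {
        account.withdraw(amount);
    }

    @Then("the balance should be {int}")
    public void theBalanceShouldBe(int expected) {
        assertEquals(expected, account.getBalance());
    }
}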

2. BDD tests are best suited to end-to-end testing, i.e. tests which exercise the system end to end, from initial input to expected output. End-to-end tests tend to be fragile, slow and time-consuming to write. Every effort should be made to restrict end-to-end testing to a few select, critical use cases, while everything else can be addressed with unit tests. BDD encourages just the opposite.

The suggested ratio between end-to-end tests and unit tests looks like the classic test pyramid: a few slow, brittle end-to-end tests at the top and many fast, reliable unit tests at the base.

 

3. Last but not least… nobody looks at the output produced by BDD tests. And especially not product owners and/or testers, so business-readable tests will be lost on them. When it comes to verifying the behaviour of the system, they will instead fire up the application under test, manually reproduce a scenario and visually check the result. It’s so much simpler and quicker than digging out the BDD test output and then trying to work out how exactly the steps described map to application functionality.

So, in summary: the core tenet of shared artefacts is certainly interesting in theory, but in practice it increases code complexity and turnaround time. One redeeming feature at least is that BDD nudges the team into writing *some* end-to-end tests (as long as they are kept under control, as the test pyramid above illustrates).

 

 

Testing multithreaded code

Java threads are managed by the underlying operating system scheduler, which is (as far as I know) always non-deterministic. Thread execution order and timing are therefore impossible to predict or reproduce. How do you test a program under these conditions?

 

Example scenario

A service invokes a long-running calculator operation on a separate thread.

 

Testing option 1: unit test with a timeout

  1. start the service and invoke the calculator logic.
  2. wait for the calculator to execute a callback.
  3. check the actual value returned by the callback.

The difficulty here is in step 2: defining how long the test should wait for the callback to be invoked. If the wait time is too long, the test will take much longer than necessary to complete. If it is too short, the test will fail intermittently. On top of that, the ideal timeout value (supposing it can be found) may change as the application evolves and the calculator speeds up (or slows down). A sketch of such a test follows the pros and cons below.

  • pros: comprehensive test of the business logic and threading mechanism.
  • cons: slow and fragile, liable to break the build at random. Tests which take too long or fail randomly usually end up being ignored.
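A minimal sketch of such a timeout-based test, using an invented CalculatorService and JUnit; the hard-coded two-second wait is exactly the guess described above.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class CalculatorServiceTest {

    // Hypothetical service under test: runs the calculation on a worker thread.
    static class CalculatorService {
        void calculateAsync(long input, java.util.function.LongConsumer callback) {
            new Thread(() -> callback.accept(input * 2)).start();
        }
    }

    @Test
    public void callbackDeliversResult() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicLong result = new AtomicLong();

        new CalculatorService().calculateAsync(21, value -> {
            result.set(value);
            latch.countDown();
        });

        // The timeout is a guess: too short and the test fails intermittently,
        // too long and the whole suite slows down.
        assertTrue("callback never invoked", latch.await(2, TimeUnit.SECONDS));
        assertEquals(42, result.get());
    }
}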

 

 

Testing option 2: the humble object

The idea is to separate the hard-to-test threading mechanism from the easy-to-test business logic, i.e. acknowledge that a unit test is not the ideal place to test the threading model (usually best covered by an end-to-end or manual test instead), while the business logic is best tested in a single thread.

1. create a single-threaded test to check that the service actually invokes the calculator.

2. create another single-threaded test to check that the calculator calls the service back with the expected calculated value (see the sketch after the pros and cons below).

  • pros: single-threaded, fast, deterministic tests
  • cons: the threading model is left untested, although this can be covered by end-to-end tests and code reviews.
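A sketch of the humble-object split (class names invented): the threading is pushed behind an Executor, so the test can inject a same-thread executor and stay deterministic.

import java.util.concurrent.Executor;
import java.util.function.LongConsumer;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class HumbleObjectTest {

    // Easy to test: pure business logic, no threads.
    static class Calculator {
        long calculate(long input) { return input * 2; }
    }

    // Humble object: all it does is hand the work to an Executor.
    static class CalculatorService {
        private final Calculator calculator;
        private final Executor executor;

        CalculatorService(Calculator calculator, Executor executor) {
            this.calculator = calculator;
            this.executor = executor;
        }

        void calculateAsync(long input, LongConsumer callback) {
            executor.execute(() -> callback.accept(calculator.calculate(input)));
        }
    }

    @Test
    public void serviceInvokesCalculatorAndCallsBack() {
        long[] observed = new long[1];
        // Runnable::run executes on the calling thread: no timeouts, no flakiness.
        CalculatorService service = new CalculatorService(new Calculator(), Runnable::run);
        service.calculateAsync(21, value -> observed[0] = value);
        assertEquals(42, observed[0]);
    }
}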

 

Which option is best? Option 2 does not provide 100% code coverage, but crucially its tests are fast and robust. Testing threads and logic together, on the other hand, is a recipe for slow and fragile tests which will very quickly end up being ignored, bringing the test coverage down to zero.

 

Zero allocation patterns in Java

However fast object creation is in Java, an excessive allocation rate can dramatically impact the overall performance of an application. Too many objects created in too little time increase pressure on the garbage collector, resulting in more frequent stop-the-world pauses, which in turn translate into jitter and/or degraded response times for the end user.

Low latency applications follow two broad strategies to work around this issue:

1. GC tuning: increasing the total heap available to the JVM will reduce the frequency of stop-the-world pauses (but not their duration). Allocating more threads to the parallel collectors and resizing the eden/survivor/tenured spaces may also help. However, these settings will sooner or later become out of date when the volume or distribution of the data processed by the application changes, and will then need to be re-evaluated.

2. Use non-allocating patterns: strive to reduce the number of objects created, and hence the workload on the garbage collector. For example:

Profile, profile, profile
Identify allocation hotspots in the code using a profiler such as the Eclipse Memory Analyzer, YourKit or JProfiler. Fix each hotspot and repeat as long as necessary.

Use primitives instead of primitive wrappers and objects
Prefer int to Integer, double to Double, char to Character, etc. Primitives are not heap-allocated objects (locals live on the stack and fields are stored inline within their containing object), so they create no work for the garbage collector.

The same reasoning extends to data structures: a SparseArray (a map which uses primitive ints for its keys) will be more memory-efficient than a HashMap, which uses objects for both keys and values.

Also worth mentioning is Trove, a library dedicated to primitive collections in Java.
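To illustrate the boxing point above with plain JDK types (no third-party library), a trivial example: the boxed accumulator allocates a new Long on almost every iteration, while the primitive one allocates nothing.

public class BoxingDemo {
    public static void main(String[] args) {
        Long boxedSum = 0L;                 // auto-unboxed and re-boxed on every iteration
        for (int i = 0; i < 1_000_000; i++) {
            boxedSum += i;                  // creates a new Long each time (outside the small cached range)
        }

        long primitiveSum = 0L;             // plain primitive, no heap allocation at all
        for (int i = 0; i < 1_000_000; i++) {
            primitiveSum += i;
        }

        System.out.println(boxedSum + " " + primitiveSum);
    }
}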

Reconsider your logging strategy
Logging is a source of allocation. Try reducing the logging level and the amount of information being logged. If you must log, choose your logger implementation carefully and pick the one with the lowest allocation overhead.
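As one concrete example (using SLF4J here purely for illustration, with an invented OrderProcessor class), guarding and parameterising log calls avoids building the message string when the level is disabled:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderProcessor {

    private static final Logger log = LoggerFactory.getLogger(OrderProcessor.class);

    void process(long orderId, double price) {
        // Eager concatenation allocates the message string even when DEBUG is off:
        // log.debug("processed order " + orderId + " at " + price);

        // The guarded, parameterised form allocates nothing when DEBUG is disabled.
        if (log.isDebugEnabled()) {
            log.debug("processed order {} at {}", orderId, price);
        }
    }
}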

Go off-heap
Direct buffers are allocated outside the heap and hence are not subject to the vagaries of the garbage collector. They are better suited to long-lived objects such as the application’s static data.
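A minimal sketch of the idea using a direct ByteBuffer; the 16-byte record layout and field names are made up for the example.

import java.nio.ByteBuffer;

public class OffHeapStore {

    private static final int SLOT_SIZE = 16;      // two long fields per record
    private final ByteBuffer buffer;

    public OffHeapStore(int records) {
        // The backing memory lives outside the Java heap, so the GC never scans or copies it.
        this.buffer = ByteBuffer.allocateDirect(records * SLOT_SIZE);
    }

    public void put(int index, long id, long price) {
        int offset = index * SLOT_SIZE;
        buffer.putLong(offset, id);
        buffer.putLong(offset + 8, price);
    }

    public long priceAt(int index) {
        return buffer.getLong(index * SLOT_SIZE + 8);
    }
}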

Use object pools
It is well established that the concept of immutability leads to better quality code. This is a fundamental tenet of the functional programming paradigm and most functional languages enforce immutability, at least by default.

In certain circumstances this can lead to the creation of an excessive number of objects: e.g. listening to market-data updates from multiple external feeds, where each feed publishes thousands (if not millions) of messages per second. If each incoming message creates an object in memory, that is a lot of objects for the JVM to keep up with. An alternative is to create a pool of objects which are kept in memory and reused for each incoming feed update.

Now, pooling has a bad rap in Java land, and this is deserved to some extent. The technique does lead to more complicated code, is more error-prone, and can actually hurt performance in multi-threaded environments where each resource managed by the pool has to be thread-safe. However, there are ways to achieve pooling without elaborate locking schemes, and the end result is well worth the additional development time.
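A bare-bones, single-threaded pool sketch with an invented Quote message type, just to show the reuse pattern; a production pool would additionally need thread-safety and bounds management.

import java.util.ArrayDeque;

public class QuotePool {

    // Hypothetical mutable market-data message, reused across updates instead of re-allocated.
    public static class Quote {
        String symbol;
        double bid;
        double ask;
    }

    private final ArrayDeque<Quote> free = new ArrayDeque<>();

    public QuotePool(int size) {
        for (int i = 0; i < size; i++) {
            free.push(new Quote());                  // pre-allocate everything up front
        }
    }

    public Quote acquire() {
        Quote quote = free.poll();
        return quote != null ? quote : new Quote();  // fall back to allocation if exhausted
    }

    public void release(Quote quote) {
        quote.symbol = null;                         // scrub state before returning to the pool
        free.push(quote);
    }
}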

Benchmarking String.intern() with JMH

String interning overview

From the Oracle Javadocs:

String.intern() returns a canonical representation for the string object.

In other words, interned strings are pooled so that there is a single instance of every distinct string (the canonical representation) in memory. This also means interned strings can be compared using the '==' operator rather than equals(), since there is no possibility of having two identical strings at two different memory addresses (provided all strings are interned).
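A quick demonstration of that property:

public class InternDemo {
    public static void main(String[] args) {
        String a = new String("config").intern();
        String b = new String("config").intern();
        System.out.println(a == b);    // true: both point at the canonical pooled instance

        String c = new String("config");
        String d = new String("config");
        System.out.println(c == d);    // false: two distinct heap copies with equal contents
    }
}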

The downside is that invoking intern() costs more CPU time than a plain string allocation.

The upside is that interning reduces memory consumption, as a function of how many dynamically built strings the application generates and how many of those strings are unique.

Microbenchmarking

The cost/benefit of string interning needs to be assessed on a case-by-case basis by taking appropriate time measurements.

The classic way to do so is to rely on a stopwatch to calculate the elapsed time before and after the operation being measured. This technique works relatively well for large, macro benchmarks where the operation being measured takes more than a few seconds, e.g. a database lookup.

Stopwatches, however, fail to take into account the many tricks used by the JVM to optimise code at runtime: warmup, inlining, dead-code elimination, loop unrolling, etc., and this can lead to biased results when dealing with millisecond/microsecond measurements. A better option in that case is to use a microbenchmarking framework for Java, such as Caliper or JMH, which will generate benchmark code that accounts for these pitfalls.

Benchmark example with JMH

Benchmark code: https://gist.github.com/eleco/d4096caa751eda96bf8f
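The gist above contains the actual benchmark; for readers who cannot load it, a comparable JMH benchmark could look roughly like this (method names and the 100-value duplication ratio are illustrative only):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class StringInternBenchmark {

    private int counter;

    @Benchmark
    public String allocate() {
        // A fresh String instance on every call.
        return new String("symbol-" + (counter++ % 100));
    }

    @Benchmark
    public String internAllocation() {
        // Same contents, but canonicalised through the intern pool.
        return new String("symbol-" + (counter++ % 100)).intern();
    }
}

Returning the string from each benchmark method lets JMH consume the value and prevents dead-code elimination from skewing the results.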

Results

In the scenario above, interning improves performance significantly. The extra cost of the intern() call is a non-issue when it reduces overall GC pressure by so much.

[JMH_interning: benchmark results chart]