2023/01/21

The three sieves of Socrates - can the tests for gossip also filter fake news?



Famously cited as a story of Socrates challenging a friend, the 'three sieves' or 'three filters' were intended as a way to decide whether to believe rumours. These three filters sound simple, however it's not always easy to apply them:

  1. The truth: are you sure what you are about to repeat is true?
  2. Is it good: is the saying positive?
  3. Is it necessary: is sharing this information important?
The concept of the three sieves to halt the spread of rumours seems straightforward. In modern times, though, it's hard to say yes to all three of these questions when we read something, which is why being selective about what we share is so important.

Fake news travels faster

It seems that even today this lesson is something many could learn from. A study of information spreading on Twitter found that fake news travels six times faster than the truth.

It seems that this historical tale is as important today as it was over two thousand years ago!

2023/01/18

Git version control system cheat sheet - life of a repository




The git distributed version control tool is one of the most popular versioning tools in use in the software industry today. It has a long history going back to 2005, when it was created by Linus Torvalds for development of the Linux kernel. Git supports distributed peer-to-peer workflows. Over its history a number of processes like 'git flow' have been built on top of it, and of course the famous GitHub and its related tools have become pervasive.

In this post I'm going to walk through the most common commands you need for your everyday workflows, from creation to distributing your changes.

Creating a git repository

Creating a git repository is very simple. Git tracks all of your file changes by storing data about the state of the files in a hidden directory '.git' at the top level of the repository.

> cd some-directory
> git init
Initialized empty Git repository in some-directory/.git/

Cloning an existing repository

In reality you won't usually be initialising a repository from scratch; more than likely you will be 'cloning' a repository from some existing location. This will give you a full copy of the repository, and is achieved with the git clone sub command. Here is an example of how that might look if you were cloning a repository from GitHub:

> git clone https://github.com/SiteMorph/protostore.git

You can confirm that your clone of the repository was successful using the git status sub command.

> git status
On branch master
Your branch is up-to-date with 'origin/master'.

nothing to commit, working tree clean

There are a few things to unpack in the status:

  • Git uses 'branches' to track multiple histories of a repository. There is a convention to use 'master' as the parent / source for all other branches. Many git based workflows also use the master branch as the destination into which changes are merged before being deployed.
  • There is also a message about being up to date with origin/master. 'Origin' is the label for the remote machine from which we just fetched the repository state. You can see what these remote locations are by calling git remote -v, as shown below. Again by convention 'origin' is the default label for the remote location just cloned.
  • Nothing to commit is self explanatory. There are no changes to files being tracked by git.
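For the repository cloned above, git remote -v would show something like this, listing the fetch and push locations for each remote label:

> git remote -v
origin  https://github.com/SiteMorph/protostore.git (fetch)
origin  https://github.com/SiteMorph/protostore.git (push)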

Adding files to be tracked

Whether you initialised a repository or cloned one, a brand new file is not tracked by git automatically, so let's create one and add it to the files git is tracking:

> touch untracked.md
> git status
On branch master
Your branch is up-to-date with 'origin/master'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        untracked.md

nothing added to commit but untracked files present (use "git add" to track)

Adding your file to the set of those tracked by git is trivial with the git add sub command.

> git add untracked.md
> git status
On branch master
Your branch is up-to-date with 'origin/master'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   untracked.md

Now we have a repo with a single tracked file which has uncommitted changes.

Committing a change to the log

Now that a change has been added to a file it's time to commit the change using the git commit sub-command. These sub-commands have a lot of options which you will pick up as you learn them; you can find out more using the git help commit sub command. Complete the prompts and you will have a new checkpoint state in the log. Once the commit is in the log you can see the checkpoint commit states by using the git log sub command.
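For example, committing the staged file with an inline message (the -m flag skips the editor prompt) looks something like:

> git commit -m "Added a new file."
[master 8824fbd] Added a new file.
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 untracked.md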

> git log
commit 8824fbd3e6d0f8a5b20c893c9fe12d6dafecd647 (HEAD -> master)
Author: Bob Builder <youremail@gmail.com>
Date:   Wed Jan 18 22:13:58 2023 +0000

    Added a new file.

Resetting the repository to some historical state

One of the most common questions about git is how to uncommit a set of changes. Git has a simple sub-command for this, git reset, however it's often not what you want to do, as changes will be lost.

> git reset --hard 8824fbd3e6d0f8a5b20c893c9fe12d6dafecd647
HEAD is now at 8824fbd Added a new file.

Resetting a repository is a rather destructive activity and is almost the opposite of the point of having a version control system. Rather than resetting hard to a given commit, a more common workflow is to copy the current branch along with its local changes. Above, HEAD is tracking the 'master' branch on the local copy of the 'origin' repository. Creating a copy of the branch before resetting it is probably a good idea. This is achieved using git branch -c <your-new-branch-name>. You can then reset 'master' while keeping your changes on a different branch of the code. You can switch to that branch using the checkout sub command, like git checkout <your-new-branch-name>, and work there without disturbing the master branch. A sketch of the whole sequence follows.
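Using a hypothetical branch name, the safety workflow looks like this: copy the current branch, reset master, then switch to the copy to carry on working.

> git branch -c my-changes-backup
> git reset --hard 8824fbd3e6d0f8a5b20c893c9fe12d6dafecd647
> git checkout my-changes-backup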

Pushing your changes to the origin repository

After some time you will build up a change log on your local repository and you will want to push it to another location to share or back up your changes. This is simple using the push sub command, like git push origin. Note here that the origin remote location label is used again to push the current branch to the remote host on the same branch name.
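For example, to push the local master branch to the origin remote explicitly:

> git push origin master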

This post is intended to give you a flavour of the most basic commands. If you are interested in specific issues please comment below.

2023/01/17

Word of the day: effusive

In order to stir the grey matter, and learn a little along the way, here is the word of the day.

Effusive


Adjective. Showing gratitude, pleasure or approval in an unrestrained manner.

... an effusive thank you ...

In geology, effusive can also refer to a volcano pouring out large volumes of molten rock. The effusive volcano shares the trait of gushing with those offering such heartfelt praise.

History


The use of the word peaked in the late 19th century, possibly coinciding with the eruption of Krakatoa, when 70% of the island collapsed, generating a shockwave which travelled around the world three times.

2021/09/19

Identity, Users and Access: The holey trinity

 


For some time I have been meaning to capture a snapshot of some of the key design considerations for the identity and account structure which emerged from quite a few years of requirements spanning ad-tech, dating and multi-account systems built across multiple startups.

I am sharing this model in the hope that some of the lessons can be useful for others faced with similar challenges. If nothing else the model, as an emergent real world solution, has some quirks not common in public solutions in this space.


Requirements across multiple companies

  • In the ad tech space there is an interesting precedent designed to protect the privacy of users: publishers and advertisers each see a different 'user id'. This is the ID that is stored in the impression and click logs which drive the entire ad industry. One of the startups explicitly needed support for the concept of scoped anonymised user identity as this was an artefact of the underlying data. We actually promoted this to a higher level requirement for all systems and it worked nicely to avoid ever exposing the internal ID of a user.

     
  • Separation of person, identity and user. This allows personally identifiable information to be separated from a 'user'. The concept of a user is very overloaded and is usually really an account. In most real world systems there is a need to have multiple people log in to the same account, which necessitates separating the identity from the user[name].
  • Users will probably want to log in with lots of different identity providers (IdPs) and other sources. It's worth noting that it's helpful to be able to store multiple grants for the same user.
  • It's the identity, not the person or the username to which logic needs to be applied.
  • At the time of writing our access system used explicit scoped grants to give access. These scopes were effectively paths of entities. The paths for user access were named consistently with our tasks and other frameworks. With hindsight it would have been nice to revisit this and re-scope access to identity.
  • Person name was of particular interest as it's useful to be able to store aliases easily. Notably most systems aren't built with this up front, however it is a very common use case in production systems.

 

Takeaways 

The model evolved over time and as such went through a number of iterations. There are a few quirks that warrant further attention. User access was one such quirk; arguably it should be associated with the identity rather than the user concept. It ended up living in user because, at the time of creation, the identity and access modules were in separate packages. Access should have been separated into another module.

Looking back, another thing that seems odd is that we didn't even have a group concept. The reason for this is that the 'thing' would often be a group. Access did have a collection of grant types which were originally based on unix style read / write / execute bits. These expanded over time to about 7 different types of permission, however they didn't explode out of control.

Startup ENG Rules Series. 1. Storage. Prefer Insertions to Updates

[Image: box doesn't fit in circles]

Background

Over the last decade I have worked at a number of very large and very small companies, the smallest being just two people and the largest having over one hundred thousand. On day one a startup faces many challenges, but the most important one is usually survival. The first couple of months are critical while going from 'zero to one', as the saying goes. The top priority has to be to secure customers and launch an initial offering, these days often referred to as a Minimum Viable Product. In this mode engineers need to build just enough to make the product work. In the spirit of Occam's Razor, all non-essential work should be avoided; there is little room for high principles in the search for results.

One thing that changes as a startup moves from prototyping into iterating is how to survive the first big customer. The focus of the team often has to shift from optimizing for a viable solution to considering aspects of stability, reliability and correctness. In some sectors these matter more or less compared to rapidly reaching a position where feedback can be sought to verify product hypotheses.

In this series of posts I am going to share a collection of 'rules' which emerged through a number of projects including the rationale, signals and counter cases.

Note: these rules are based on experience from startups and may not reflect common practices in larger companies. These insights are shared purely based on experiences from:

  • SiteMorph: An SEO / SEM marketing tool for SMBs.
  • ClickDateLove.com (Muster): A dating site employing basic ML approaches to create better profiles.
  • Shomei / Futreshare: Ad attribution heuristic modelling for advertisers with billions of ad impressions.
  • Upgrade Digital: A hospitality booking platform built for developers, with one of the fastest build times for web developers available in the world at the time.

 The objective of these rules was to have standard solutions to everyday questions based on real world lessons. Having de-facto solutions to everyday problems meant that development could go faster. Going faster for a startup means less cost, faster iteration and more feedback. Some of the rules may seem to contradict this when they add overhead. The point here is that the solutions were born out of necessity. This necessity drove iteration to a viable solution.

Rule 1. Storage, prefer insertions to updates

 Advice

When a data attribute of an entity may be written or updated by a number of writers, prefer refactoring that attribute into a separate concept and inserting it into a different store, rather than updating a field on an existing entity. Examples include:

  •  Payment authorization code for a payment
  • Approval for a change where multiple people can approve
  • Any transaction sensitive attribute which could be the source of a race condition.

Example

Consider a hotel booking for a single stay. This can be expressed in normal form along the lines of:

  • Hotel Booking
    • user : who books the stay
    • checkin : date of arrival
    • rooms : ... details of the required rooms
    • total cost: sum of all room night rates and fees.
    • payment request: payment transaction token used to initiate the transaction.
    • payment confirmation: payment completion token from transaction processor.
    • payment cancellation: the cancellation token passed by the transaction processor.

This seems pretty reasonable and has all of the fields associated with the booking, however without good locking of the entity type multiple actors are able to update the fields, leading to a lost update race condition. You may argue that locking isn't such a bad thing, however the underlying locking semantics typically lead to centralization, as decentralized consistency isn't offered by many storage engines and the CAP theorem comes into play. Rather than update attributes of the existing booking, one typically safe solution, well aligned with the eventual consistency offered by many storage engines, is to always insert. To achieve this the entities need to be separated like so (a sketch in code follows the list):

  • Hotel booking
    • user
    • checkin
    • rooms
    • total cost
  • Payment request
    • hotel booking reference
    • payment request token
  • Payment confirmation
    • payment request reference
    • payment confirmation token
  • Payment cancellation
    • payment confirmation reference
    • payment cancellation token
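To make the shape of this concrete, here is a minimal sketch of the insert-only approach in Java. All class and field names are illustrative rather than taken from a real system; the point is that each payment state change is a new immutable record referencing its predecessor, and the current state is derived by reading the log rather than stored in a mutable field on the booking.

import java.time.Instant;
import java.util.Optional;
import java.util.concurrent.ConcurrentLinkedQueue;

// Each state change is a new immutable record pointing back at its
// predecessor; nothing is ever updated in place.
record PaymentRequest(String bookingRef, String requestToken, Instant at) {}
record PaymentConfirmation(String requestToken, String confirmationToken, Instant at) {}

class PaymentLog {
    // Append-only log; a real system would back this with a store that
    // offers cheap inserts rather than an in-memory queue.
    private final ConcurrentLinkedQueue<Object> events = new ConcurrentLinkedQueue<>();

    void append(Object event) {
        events.add(event); // O(1) insert, no lock held on the booking
    }

    // Derive the current state by scanning the log instead of reading a
    // mutable 'payment confirmation' field on the booking entity.
    Optional<PaymentConfirmation> confirmationFor(String requestToken) {
        return events.stream()
                .filter(e -> e instanceof PaymentConfirmation c
                        && c.requestToken().equals(requestToken))
                .map(e -> (PaymentConfirmation) e)
                .findFirst();
    }
}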

Reasoning

  1. Avoiding locks helps us to scale better. Many storage engines only support table level locks, which can be a significant issue in online transaction processing systems. One payment provider I have worked with had median API response times in the 1000+ms range, and even the best available are often still in the 200ms range. Effectively this means that if you hold a lock on your booking or payment table while you wait, you can only process ~5 transactions per second. Always inserting typically has O(1) performance semantics and is usually limited only by disk / network speed.
  2. Avoiding lock release starvation is a significant gain. In the world of scaled data centres it's only a matter of time before one of your services crashes during a transaction. The law of large numbers says that as you run more services you are likely to observe more instance crashes. With 99.5% availability you still have 12960 seconds of downtime per instance every month to contend with. Even using advanced monitoring you can't avoid some crashes at scale, so it's essential to plan for them. When a process crashes, most distributed locking solutions have to wait for an automated timeout of the lock. Eliding this problem by not locking is a significant win in degraded situations.
  3. Minimise the window for issues. Recovery is always required, but writing updates with O(1) insert semantics dramatically narrows the window for lost writes. For our storage system at the time we were seeing insertion times in nanoseconds for the first disk flush. At that point we only saw about one crash during insert per year, and we built a recovery task for that too. Keep an eye out for the future rule on self correction.
  4. Minimise your entity storage's reliance on technology specific sophisticated locking, e.g. relational database locks.

Context

  • For Upgrade Digital one of our key value propositions was that our platform included correction of booking state across the booking and payment processing providers. One of the hotel chains we worked with regularly had rooms without payment and payments without rooms!
    • Some payment providers we used had delays in correction of up to 24 hours in production, so we had to recover elegantly. This might mean retrying a transaction that had previously timed out, only to find it had later succeeded, so we needed to keep all request initialisation vectors.
    • Hotel room booking systems often allow manual overrides for room allocations as well as overbooking as a standard practice. This could mean that the actual product wasn't available for extended periods of time.
  • For general payment systems it's good practice to expect delays in callbacks and generally avoid overwriting fields, as race conditions and replays are regular occurrences.
  • For hospitality the Upgrade Digital platform provided a consistent RESTful API across multiple Micros Opera versions and a number of payment processors. Our play / replay / check approach to async task execution automatically repaired numerous issues on either side of the platform, meaning we could sleep at night. For a small on-call team supporting bookings across 120 countries this is a must!

Counter cases

Despite the general practice of always inserting there are notable counter cases where we did use basic locking functionality with a 'test and set' semantic:

  • In our task scheduling library we used a task claim, compatible with AWS SQS, to claim async work. This claim required test and set style semantics from the storage engine, which was easy to achieve with SQL and some NoSQL storage engines like DynamoDB (see the sketch after this list).
  • Critical sections of code where exactly once semantics are required.
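For illustration, here is a minimal sketch of a test and set style task claim in Java over JDBC. The table and column names are hypothetical; the key idea is that the conditional UPDATE is atomic, so exactly one worker sees a row count of 1 and wins the claim.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class TaskClaimer {
    // Atomically claim a task: the WHERE clause is the 'test', the SET
    // is the 'set'. Only one concurrent worker can succeed.
    static boolean claim(Connection db, String taskId, String workerId) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement(
                "UPDATE tasks SET claimed_by = ? "
              + "WHERE id = ? AND claimed_by IS NULL")) {
            stmt.setString(1, workerId);
            stmt.setString(2, taskId);
            return stmt.executeUpdate() == 1; // 1 row changed means we won
        }
    }
}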


2018/06/30

Top tips for refreshing Android skills with the Little Miracle prototype

Over the last few weeks I have been working on a prototype of a pregnancy contraction tracking app and wanted to share some tips from my experience so far.

Go native with Android

Android has a number of native solutions for problems you may encounter during your development. One really good example of this is the choice between java Threads, TimerTask and Handlers for updating the UI based on clock or timing events. In my case I was using a timer to schedule asynchronous events that update the user interface, which is an anti pattern. When updating the user interface the best practice is to use a Handler, as sketched below. If you are seeing leaked context warnings or experiencing issues with thread scopes then there is probably a better way.
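Here is a minimal sketch of the Handler approach, assuming an Activity with a TextView showing elapsed contraction time; the class and view names are illustrative only.

import android.os.Handler;
import android.os.Looper;
import android.widget.TextView;

// Schedule periodic UI updates via a Handler bound to the main looper,
// instead of a TimerTask thread that cannot touch views directly.
public class ContractionTimer {
    private final Handler mainHandler = new Handler(Looper.getMainLooper());
    private final TextView elapsedView; // hypothetical view showing elapsed time
    private long startMillis;

    private final Runnable tick = new Runnable() {
        @Override public void run() {
            long seconds = (System.currentTimeMillis() - startMillis) / 1000;
            elapsedView.setText(seconds + "s"); // safe: runs on the UI thread
            mainHandler.postDelayed(this, 1000); // reschedule the next tick
        }
    };

    public ContractionTimer(TextView elapsedView) {
        this.elapsedView = elapsedView;
    }

    public void start() {
        startMillis = System.currentTimeMillis();
        mainHandler.post(tick);
    }

    public void stop() {
        mainHandler.removeCallbacks(tick); // avoid leaking the callback/context
    }
}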

Compat[ibility] is the main route

The compatibility libraries are not so much extras as the normal way to build things. When prototyping they can help hide a lot of the complexity of the ever evolving Android ecosystem. Unless you are developing for a specific device, the compatibility libraries are a must!

2018/03/29

Changing teams at Google

After spending a little over a year working with Ads partners I am now moving to a different team working with Google partners! Part of the transition involves getting up to speed with Android development, so I will be building a demo app over the next few weeks and wanted to share my experiences.

To make things more interesting I will be starting out with an older version of Android (Nougat) and migrating my app to new versions. At the same time I will migrate the app from Java to Kotlin.

In order to make things as real as possible I will be creating an app in a very competitive space (pregnancy tracking) and launching it on the app store. As I go I will share the useful resources and guides that helped me.

Hello World

To get started I will be taking a couple of courses to refresh my knowledge as it has been a year since I built an Android app and I am guessing a lot has changed. Here's where I started:

2017/02/15

Redeploying Rediscover.Work to Save Money Part 2


Migrating to Google App Engine was more straightforward than trying to rebuild my environment, though it meant changing a few things in my setup.

- Unable to update app: Class file is Java 8 but max supported is Java 7. Don't forget Guava: only versions of Guava up to 20 support Java 1.7.
- The app config version isn't the SDK version; it is like the AWS build version and can't contain '.' periods, so version 0 it is.
- Downloading the database is a pain in a server-less environment as AWS only supports binary snapshots. But fear not, with a bit of ssh / yum / security group manipulation the download is done in about 20 minutes.
- Deploy a new SQL instance in the cloud console. Note you need to include the create database statement or use the advanced options to specify the database to import your SQL into.
- Then hit a brick wall: "You can't have any JDBC database with Google App Engine." due to "java.lang.management.ManagementFactory is a restricted class". This is a slight misnomer; it is the creation of thread pool connection pool resources which is prohibited.
- In steps a non threaded connection pool: http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Plain_Ol'_Java
- Unfortunately the Tomcat connection pool was a red herring, failing with "java.lang.RuntimePermission" "modifyThreadGroup"
- Logging to a file doesn't work, (neither does looking at the file system to figure out where you are).
- More security errors: setProperty.networkaddress.cache.ttl
- Moved to Google SQL driver "com.mysql.jdbc.GoogleDriver"
- ERROR: (gcloud.beta.sql.connect) HTTPError 400: Invalid value for: Invalid format: "2017-02-16T20:01:08+00:00" is malformed at "+00:00"

2017/02/12

Redeploying Rediscover.Work to Save Money Part 1

After five years working in startups, and more recently in 'stealth mode' startups, I have decided to go back to my favourite place to work. This means that I can now share some of my hacking projects more openly.

At the moment one of my side projects which I want to keep alive is Rediscover.Work. It is currently hosted in AWS using Elastic Beanstalk and RDS. This made sense at the time when I was trying to get the site up quickly. Now that the project is no longer funded though it is time to do a bit of belt tightening.

As with any project it is worth looking at the different options for how it could be deployed. There have never been more than there are now:

* AWS Elastic Beanstalk + RDS. Current running cost of around $96 per month. + Easy install. -  Cost.
* AWS Lambda + RDS. + Simple scaling. - Migration costs are prohibitive.
* EC2 Instance: $9.52 according to the price calculator. + Already have needed tools - More admin to manage instance personally.
* Google App Engine and Cloud SQL: $8.01 according to the price calculator. + Ease of maintenance and app engine security scanning. - Migration to app engine layout.

As I already have all the tools needed to create the EC2 Instance solution and it requires the minimum investment in tooling I chose to go down that route first and see how things played out. In part 2 of this blog I will see how hard it is to migrate to app engine as this seems like a better solution for the long term.

Workflow:

  1. Build dependencies and update dependency versions using:

mvn versions:display-dependency-updates

  2. Package the code into a WAR to be deployed on the new shared EC2 instance.
  3. Launch a new instance which will run the site and database.
  4. Start hacking ansible (and lose the will to live due to so many issues with config).
  5. Stop.

Look out for part 2, where I will probably decide to migrate the whole project, as managing machines is getting too painful for this time-poor old coder.

2017/02/11

Wine Tasting at Majestic Camden

I recently attended my first wine tasting, which was organised by my private Sommelier. We tried Definition wines from Majestic, which it selects as reference examples of different wines. We also learned how to pair food, and my sommelier won the competition to match food and wines. What great taste!

We tried:

Sauvignon Blanc grown on granite. Marlborough, NZ; heavy yield, fast ripening. Acidic. Some regions are full bodied, like Chile. This is light, fresh and green, almost tastes like nettles.

Sauvignon Blanc. Sancerre, Loire, FR. Darker and heavier flavoured due to the soil. Bolder than the first Sauvignon Blanc because of the chalk soil. Easy drinking and balanced, with less acidity.

Chardonnay. Chablis (Burgundy), France. Unoaked; stored in stainless steel temporarily before bottling. Oak barrelling would normally reduce the acidity. Light but zesty, with more body than the Sauvignon Blancs.

Definition prosecco. Very light and medium dry, excellent with melon and prosciutto.

Pinot Noir, NZ. Picking a great one can be tricky from a French range. This one is quite spicy and peppery. Some people say it is like grape juice or raisins. I like it.

Malbec, Argentina (Mendoza). Very full bodied. Smoky blackberry. Great with steak frites.

Tempranillo, Rioja, ES. A blend from different vineyards. The Gran Reserva we tried was the oldest, with around 36 months of oak ageing. This was the last and heaviest of the tasting.

I'm looking forward to the next level of training!

2017/02/08

Living with Android 4.* on Samsung S3

Due to a screen fade issue with my Samsung S7 I recently had the chance to downgrade temporarily to my old Samsung S3.

The first thing that I noticed was how much smaller the old device is, both in terms of the physical device and screen real estate. Welcome to zooming and scrolling! From a development point of view it is all too easy to think of these two devices as the same format but that could be a mistake. It may be good to consider making the main view different for small handsets.
The second thing to hit me, which was more profound, was the performance, or lack of it. I removed every app I didn't absolutely need. It turns out that still wasn't enough. If you are wondering why your older relatives or friends are so slow to message back, this is probably why. It is so tempting, as a developer using the latest handset, to think people will upgrade their handsets. Guess again: according to the official Google developer pages a massive 36.3% of Android handsets are running KitKat (API level 19) or lower as of January 2017. If you are building a mass market app this simply isn't a demographic that you can afford to ignore.

There are some tech companies who occasionally do downgrade testing for a large set of their employees. Facebook is one noteworthy example.

Many apps just can't run in usable time; by that I mean that they remind me of using an x286 without the coprocessor. A particularly bad example was the Twitter app, which crashed every time the "share content on social" hook was called to find contextual information.
The question remains: does developing for the latest hardware first make sense if you are trying to reach mass appeal, or does making your developers start with the end in mind ensure your app will succeed?

2016/09/23

Discovering The Beauty of Tuscany in Italy

Italy is without doubt my favourite place to visit and my last trip was no exception. There are some really stunning places, but my favourite from the last trip was Montepulciano.

The town itself is small and can easily be walked around in a matter of hours, but to really get to know this gem of a spot you have to spend a little more time finding your way around.

One definite attraction is the wine cellars of this great old wine growing region.


As well as the beauty of this gorgeous town, our accommodation at Residence Fabroni really made the visit special. Ombretta, the manager of the property, was probably the most hospitable host you could ask for. The view from the balcony was worth the trip alone.


As well as the Tuscany wine region the trip also included Florence and Verona, the town famous for its part in Romeo and Juliet. The restaurants definitely seemed to favour eating in couples!

More photos coming soon when I manage to find a USB card reader!

2016/05/31

QA for the Customer

Some time ago I had the chance to work with a development team which had a separate QA resource. The team was working with the standard "over the wall" mentality: developers do stuff, QA engineers pick it apart, and so on.

I heard a phrase which really gave me pause:

"QA is working on behalf of the customer"

This sounds great, the QA engineers really take ownership of thinking about the customer. But what does it say about the development team's motives? Aren't they thinking about the customer?

In the teams I have helped shape I have always avoided dedicated QA because I suspect it creates an environment where people defer responsibility. On the other hand it might mean that a stretched development team can roll out faster, relying on QA to catch the issues. That might sound plausible, but in a world of ever accelerating deployment can manual testing really keep up?

2016/04/19

Recruiting Full Stack Developer

I recently joined www.parktechnology.com as the CTO.

One of the great pleasures of my role is to start hiring a technology team. The first role we are trying to recruit for is a 'Full Stack Developer'. We will also be hiring Android and iOS developers too, so watch this space.

UPDATE...

We were really lucky to be able to find two really strong candidates who have a wealth of experience. Exciting times!

Blog Draft Ratio: Should We Publish Everything?

Recently I have had a proliferation of ideas about blog posts to write. The challenge is knowing which are really interesting to readers. One thought was to publish every draft as it is, then let views / votes / comments decide which to refine. Does anyone want this?

2016/03/13

Random Damage

One of the hardest things to rationalise is random acts. Today I found someone had pushed my motorbike over. Unfortunately there was quite a bit of damage. I still need to strip down the rear fairing, which is cracked, but the damage already stands at over £350.

With insurance premiums spiralling and the excess now nearly half the value of the bike, maybe it is time to let it go. You just can't keep anything nice.

2016/01/19

How to Apply Sales CRM to Job Hunting

I recently read a number of books on sales about using technology like Customer Relationship Management (CRM) tools to measure and qualify your sales funnel, increasing the predictability of leads and opportunities turning into offers and contracts. Here is how I applied the same approach to job hunting.

By leads I'm talking about people or companies I haven't contacted or worked with. Once they are qualified they become opportunities. This seems to be a common distinction in the sales industry. The terminology is better suited to sales and marketing, but for the sake of speed I'm going to use it here.

Many sales organisations also split the tasks of generating leads and developing them into opportunities. My lead generation activities included:


  • Attending conferences and fairs.
  • Researching top companies via lists from sites like Glassdoor and the Guardian top graduate employers lists.
  • Through my network trying to identify potential companies.

From these lists of companies the qualification step is quite different from an outbound sales team's approach. In a sales team the hand-over happens after the lead generation team has made contact and qualified that the potential customer is in the market to buy. The opportunity team then focuses on 'closing' the deal.

In the job hunting context this looked like researching the companies and identifying whether they were hiring for a good work match. To stretch the metaphor further, the HR, talent team or recruiters are the 'gatekeepers'. Initial calls would be with HR, for example, and once there seemed to be potential and the discussion involved the leadership team of the company, they were converted to opportunities.

In keeping with the predictable revenue approach I tried to keep track of how many of the companies I found were in market. For me it was around 20%. This helped me figure out how many companies I needed to research to feed my opportunity funnel.

Using my CRM tool I set up an opportunity funnel with stages for:
  1. Introduction Email.
  2. Initial Call. 
  3. Send CV.
  4. On Site Interview(s).
  5. Proposal Meeting.
  6. Offer Review.
  7. Accepted.
This was really helpful as I could prioritise based on the likelihood of getting an offer and how far along the process I was. Being able to track the attrition also gave me an idea of how many lead generations / qualifications I needed to do in order to 'fill' my week, the goal being to have 5 - 10 on site interviews per week.

2016/01/15

Fix Found for Foundation 5 Syntax error unrecognized expression

If you are using Zurb Foundation 5 and are seeing this error message:

Uncaught Error: Syntax error, unrecognized expression: [data-'Times New Roman'-dropdown]


Then check out this solution from http://jelaniharris.com/2014/fixing-foundation-5s-unrecognized-expression-syntax-error/

2016/01/14

Installation Issues of Ubuntu 15.10 and 14.04 on Lenovo X1 Carbon

I recently bought a Lenovo X1 Carbon and had a bit of a 'mare' installing Ubuntu (both 15.10 and 14.04). I had error messages like:


  • Unable To Find A Medium Containing A Live File System
  • ACPI PPC Probe failed 

And some other issues which possibly aren't related. The bottom line is that it wasn't possible to install Ubuntu from a USB disk as the live CD medium wasn't found.

There is an open bug with USB3 disks, and I found that if I forced my laptop to treat the ports as USB2 (disable USB3 in the BIOS) everything worked fine. If you are having this issue and this post helps, feel free to tweet me your dmesg output so I can add it to the list!

2015/12/31

Which is the next hot technology for full stack developers?

I am trying to compile a list of technology stacks that full stack developers are using and could in the future become part of a recognised approach like LAMP. Please tweet me your suggestions...


  • React / Go / Mongo : MGR 
  • React / Node / Mongo: RNM
  • Angular / Ruby / Mongo : MRA
  • Ruby / Rails / Postgres : RRP
How about layout technologies? The main runners seem to be:

  • Foundation
  • Bootstrap
Then there are view / control systems like:
  • Backbone
  • Angular
  • Ember