Identity, Users and Access: The holey trinity

For some time I have been meaning to capture a snapshot of the key design considerations for the identity and account structure which emerged from years of requirements spanning ad-tech, dating and multi-account systems built across multiple startups.

I am sharing this model in the hope that some of the lessons will be useful for others faced with similar challenges. If nothing else, the model, as an emergent real-world solution, has some quirks not common in the public solutions in this space.

Requirements at scale of multiple companies

  • In the ad-tech space there is an interesting precedent designed to protect the privacy of users: publishers and advertisers each see a different 'user id'. This is the ID stored in the impression and click logs which drive the entire ad industry. One of the startups explicitly needed support for the concept of scoped, anonymised user identity as this was an artefact of the underlying data. We actually promoted this to a higher-level requirement for all systems, and it worked nicely to avoid ever exposing the internal ID of a user.

  • Separation of person, identity and user. This allows personally identifiable information to be separated from a 'user'. The concept of a user is very overloaded and is usually really an account. In most real-world systems there is a need for multiple people to log in to the same account. This necessitates separating the identity from the user[name].
  • Users will probably want to log in with lots of different IdPs and other sources. It's worth noting too that it's helpful to be able to store multiple grants for the same user.
  • It's the identity, not the person or the username, to which logic needs to be applied.
  • At the time of writing our access system used explicit scoped grants to give access. These scopes were effectively paths of entities. The paths for user access were named consistently with our tasks and other frameworks. With hindsight it would have been nice to revisit this and re-scope access to identity.
  • Person name was of particular interest as it's useful to be able to store aliases easily. Notably, most systems aren't built with this up front; it is, however, a very common use case in production systems.
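The scoped identity requirement from the first bullet can be sketched as deriving a stable pseudonymous ID per scope, for example with an HMAC over the internal ID. This is a minimal illustrative sketch, not the original implementation; the key handling and names are assumptions:

```python
import hmac
import hashlib

SECRET_KEY = b"server-side-secret"  # hypothetical: kept server-side, never logged

def scoped_user_id(internal_id: str, scope: str) -> str:
    """Derive a stable pseudonymous ID for one scope (e.g. a publisher).

    The same internal user maps to different IDs in different scopes,
    so no party can join its logs with another party's logs.
    """
    message = f"{scope}:{internal_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

# The same user yields unrelated IDs for a publisher and an advertiser.
publisher_view = scoped_user_id("user-42", "publisher:acme-news")
advertiser_view = scoped_user_id("user-42", "advertiser:shoes-r-us")
assert publisher_view != advertiser_view
# The mapping is stable, so logs within one scope still join correctly,
# and the internal ID is never exposed.
assert publisher_view == scoped_user_id("user-42", "publisher:acme-news")
```

The nice property, as noted above, is that the internal ID never leaves the system at all.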



The model evolved over time and as such went through a number of iterations. There are a few quirks that warrant further attention. User access was one such quirk; arguably it should be associated with the identity rather than the user concept. It ended up living in user because, at the time of creation, the identity and access modules were in separate packages. Access should have been separated into another module.

Looking back, another thing that seemed odd is that we didn't even have a group concept. The reason for this is that the 'thing' being accessed would often itself be a group. The access model did have a collection of grant types which were originally based on Unix-style read / write / execute bits. These expanded over time to about 7 different types of permission, but they didn't explode out of control.
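The Unix-style grant bits could be modelled along these lines. This is an illustrative sketch, not the original code; the grant names beyond read / write / execute are invented examples:

```python
from enum import IntFlag

class Grant(IntFlag):
    """Unix-style permission bits, extended with further grant types."""
    READ    = 1 << 0
    WRITE   = 1 << 1
    EXECUTE = 1 << 2
    # Later additions (names are hypothetical examples):
    DELETE  = 1 << 3
    SHARE   = 1 << 4
    ADMIN   = 1 << 5
    AUDIT   = 1 << 6

def has_access(granted: Grant, required: Grant) -> bool:
    # A grant satisfies a request when every required bit is present.
    return (granted & required) == required

editor = Grant.READ | Grant.WRITE
assert has_access(editor, Grant.READ)
assert not has_access(editor, Grant.DELETE)
```

Bit flags like these stay cheap to store and check, which is one reason a handful of extra grant types never exploded out of control.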

Startup ENG Rules Series. 1. Storage. Prefer Insertions to Updates

box doesn't fit in circles

Over the last decade I have worked at a number of very large and very small companies, the smallest being just two people and the largest having over one hundred thousand. On day one a startup faces many challenges, but the most important one is usually survival. The first couple of months are critical while going from 'zero to one', as the saying goes. The top priority has to be to secure customers and launch an initial offering, these days often referred to as a Minimum Viable Product. In this mode engineers need to build just enough to make the product work. As with Occam's Razor, all non-essential work should be avoided; there is little room for high principles in the search for results.

One thing that changes as a startup moves from prototyping into iterating is how to survive the first big customer. The focus of the team often has to shift from optimizing for a viable solution to considering stability, reliability and correctness. In some sectors these matter more or less compared to rapidly reaching a position where feedback can be sought to verify product hypotheses.

In this series of posts I am going to share a collection of 'rules' which emerged through a number of projects including the rationale, signals and counter cases.

Note: these rules are based on experience from startups and may not reflect common practices in larger companies. These insights are shared purely based on experiences from:

  • SiteMorph: an SEO / SEM marketing tool for SMBs.
  • ClickDateLove.com (Muster): A dating site employing basic ML approaches to create better profiles.
  • Shomei / Futreshare: Ad attribution heuristic modelling for advertisers with billions of ad impressions.
  • Upgrade Digital: a hospitality booking platform built for developers, with one of the fastest build times available to web developers in the world at the time.

The objective of these rules was to have standard solutions to everyday questions based on real-world lessons. Having de-facto solutions to everyday problems meant that development could go faster. Going faster for a startup means less cost, faster iteration and more feedback. Some of the rules may seem to contradict this when they add overhead; the point is that the solutions were born out of necessity, and that necessity drove iteration to a viable solution.

Rule 1. Storage, prefer insertions to updates


When a data attribute of an entity may be written or updated by a number of writers, prefer refactoring that attribute into a separate concept and inserting it into a different store rather than updating a field on an existing entity. Examples include:

  • Payment authorization code for a payment
  • Approval for a change where multiple people can approve
  • Any transaction-sensitive attribute which could be the source of a race condition.


Consider a hotel booking for a single stay. This can be expressed in normal form along the lines of:

  • Hotel Booking
    • user : who books the stay
    • checkin : date of arrival
    • rooms : ... details of the required rooms
    • total cost: sum of all room night rates and fees.
    • payment request: payment transaction token used to initiate the transaction.
    • payment confirmation: payment completion token from transaction processor.
    • payment cancellation: the cancellation token passed by the transaction processor.

This seems pretty reasonable and has all of the fields associated with the booking. However, without good locking of the entity type, multiple actors are able to update the fields, leading to a lost-update race condition. Locking isn't such a bad thing, you may argue; however, underlying locking semantics typically lead to centralization, as decentralized consistency isn't offered by many storage engines and the CAP theorem comes into play. Rather than updating attributes of the existing booking, one typically safe solution, well aligned with the eventual consistency offered by many storage engines, is to always insert. To achieve this the entities need to be separated like so:

  • Hotel booking
    • user
    • checkin
    • rooms
    • total cost
  • Payment request
    • hotel booking reference
    • payment request token
  • Payment confirmation
    • payment request reference
    • payment confirmation token
  • Payment cancellation
    • payment confirmation reference
    • payment cancellation token
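As a minimal illustration of the insert-only shape above, here is a sketch using SQLite. The table and column names are my own, not the original platform's:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE hotel_booking (
    id INTEGER PRIMARY KEY,
    user TEXT, checkin TEXT, rooms TEXT, total_cost INTEGER
);
-- Each payment step is a new row referencing the previous step, so
-- concurrent writers insert new facts instead of racing to update one field.
CREATE TABLE payment_request (
    id INTEGER PRIMARY KEY,
    booking_id INTEGER REFERENCES hotel_booking(id),
    request_token TEXT
);
CREATE TABLE payment_confirmation (
    id INTEGER PRIMARY KEY,
    request_id INTEGER REFERENCES payment_request(id),
    confirmation_token TEXT
);
""")

booking = db.execute(
    "INSERT INTO hotel_booking (user, checkin, rooms, total_cost) "
    "VALUES (?, ?, ?, ?)", ("alice", "2017-06-01", "double x1", 180)
).lastrowid
request = db.execute(
    "INSERT INTO payment_request (booking_id, request_token) VALUES (?, ?)",
    (booking, "req-123")
).lastrowid
db.execute(
    "INSERT INTO payment_confirmation (request_id, confirmation_token) "
    "VALUES (?, ?)", (request, "conf-456")
)

# The booking's payment state is derived by joining the event rows,
# never by mutating the booking itself.
row = db.execute(
    "SELECT c.confirmation_token FROM payment_confirmation c "
    "JOIN payment_request r ON c.request_id = r.id WHERE r.booking_id = ?",
    (booking,)
).fetchone()
assert row == ("conf-456",)
```

Note that the booking row is written once and never updated; a late cancellation or a retried request simply becomes another inserted row to reconcile later.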


  1. Avoiding locks helps us to scale better. Many storage engines only support table-level locks, which can be a significant issue in online transaction processing systems. One payment provider I have worked with had median API response times in the 1000+ms range; even the best available are often still in the 200ms range. Effectively this means that if you hold a lock to update your booking or payment table, you can only process ~5 transactions per second. Always inserting typically has O(1) performance semantics and is limited only by disk / network speed.
  2. Avoiding lock release starvation is a significant gain. In the world of scaled data centres it's only a matter of time before one of your services crashes during a transaction. The law of large numbers says that as you run more services you are likely to observe more instance crashes. With 99.5% availability you still have 12,960 seconds of downtime per instance every month to contend with. Even using advanced monitoring you can't avoid some crashes at scale, so it's essential to plan for them. When a process crashes, most distributed locking solutions have to wait for an automated timeout of the lock. Avoiding this problem by not locking is a significant win in degraded situations.
  3. Minimise the window for issues. Recovery is always required, but writing updates with O(1) insert semantics dramatically narrows the window for lost writes. For our storage system at the time we were seeing insertion times in nanoseconds for the first disk flush. At that point we only saw about one crash during insert per year, and we built a recovery task for that too. Keep an eye out for the future rule on self-correction.
  4. Minimise your entity storage's reliance on technology-specific sophisticated locking, e.g. relational database locks.


  • For Upgrade Digital one of our key value propositions was that our platform included correction of booking state across booking systems and payment processing providers. One of the hotel chains we worked with regularly had rooms without payment and payments without rooms!
    • Some payment providers we used had delays in correction of up to 24 hours in production, so we had to recover gracefully. This might mean retrying a transaction that had previously timed out only to see that it later succeeded, so we needed to keep all request initialisation vectors.
    • Hotel room booking systems often allow manual overrides for room allocations, as well as overbooking as a standard practice. This could mean that the actual product wasn't available for extended periods of time.
  • For general payment systems it's good practice to expect delays in callbacks and generally avoid overwriting fields, as race conditions and replays are regular occurrences.
  • For hospitality, the Upgrade Digital platform provided a consistent RESTful API across multiple Micros Opera versions and a number of payment processors. Our play / replay / check approach to async task execution automatically repaired numerous issues on either side of the platform, meaning we could sleep at night. For a small on-call team supporting bookings across 120 countries this is a must!

Counter cases

Despite the general practice of always inserting there are notable counter cases where we did use basic locking functionality with a 'test and set' semantic:

  • In our task scheduling library we used a task claim, compatible with AWS SQS, to claim async work. This claim required a test-and-set style storage engine, which was easy to achieve with SQL and some NoSQL storage engines like DynamoDB.
  • Critical sections of code where exactly once semantics are required.
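The task-claim counter case can be sketched as a conditional update that succeeds for exactly one claimant. This is an illustrative sketch against SQLite; the schema is invented, not the original library's:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE task (id INTEGER PRIMARY KEY, owner TEXT)")
db.execute("INSERT INTO task (id, owner) VALUES (1, NULL)")

def claim(worker: str, task_id: int) -> bool:
    """Test-and-set: succeeds only if the task is still unowned."""
    cursor = db.execute(
        "UPDATE task SET owner = ? WHERE id = ? AND owner IS NULL",
        (worker, task_id),
    )
    return cursor.rowcount == 1  # exactly one claimant wins

first = claim("worker-a", 1)
second = claim("worker-b", 1)  # loses: the row no longer matches owner IS NULL
assert first and not second
```

The `WHERE ... owner IS NULL` predicate is what makes this a test-and-set: the check and the write happen in one atomic statement, which is the property a claim needs.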


Always Friday Takeoff - Starting Full Time as CEO

Today my partner is taking the huge and exciting step of switching to working on her own company full time, as the CEO of Always Friday, which has been helping small businesses grow for years and has grown itself into a successful company. Always Friday helps companies with business development, using digital marketing such as paid search marketing and conversion optimisation to generate leads.

Always Friday offers a full suite of services to help companies grow by delivering leads and sales. Sasha has already proven herself as a highly effective partner in her customers' success and I'm very excited to see her take this next big step in her career, running her own company.

Please join me in congratulating Sasha on the bravery and determination that I am sure will ensure her and her customers' success. I couldn't be more proud!


Google Cloud Platform (GCP) Content Delivery Network (CDN) Cache Control Script and Compression

If you are, like me, building websites using Google App Engine, you may be interested in using the Google Cloud Platform Content Delivery Network to host static files and improve your website performance. There are a number of steps required to configure the CDN using a storage bucket for your assets. Below I will walk through the steps to create a storage-bucket-backed CDN solution, including the load balancer which is required for SSL support.

SSL Configuration

Unlike App Engine, which supports automatically generated SSL certificates, CDN / Load Balancing requires you to provide your own SSL certificate. If you are using a provider like NameCheap then the first step is to create a key and a certificate signing request. Using the Certificate Signing Request you can request a certificate from your SSL provider of choice. After the usual verification of your email address you will typically end up with a certificate file. This can then be used with the create certificate resource tool.

1. Create your certificate

openssl genrsa -out my-private-key.key 2048

2. Create your certificate signing request

openssl req -new -key my-private-key.key -out certificate-signing-request.csr

Once you have your certificate signing request you will paste it into your SSL provider's site to start the signing process. Once your domain validation is done you will be sent your certificate. Save it as my-certificate.cert to continue below.

3. Upload your certificate

gcloud compute ssl-certificates create certificate-name --certificate my-certificate.cert --private-key my-private-key.key

Create your CDN resource

Once you have imported your SSL certificate using the gcloud tool you can go through the setup to create your CDN resource. There are a few configuration options to take into consideration:
  • When you create a new origin you will need to create a new load balancer for it.
  • Backend configuration: create a new storage bucket. Make sure it is CDN enabled.
  • Host and path rules: configure your custom host, e.g. cdn.yourdomain.com, with the path /* to match all content.
  • Frontend configuration: pick the HTTPS protocol and the premium network service tier, create a static IP, and select the certificate you uploaded above.
Make a note of the IP address of your load balancer once it is created or refer to the address list in your account.

Enabling Gzip Compression during upload

To enable gzip compression for CSS and JavaScript text files, and so boost performance for assets served from a storage bucket, it is necessary to upload the content in gzip format; the CDN doesn't support dynamically compressing content based on the browser request. The gsutil cp -Z flag enables automatic compression of files as they are copied to your bucket. The file is stored in gzip format, and if the browser request includes an Accept-Encoding header for gzip then the bucket serves the compressed content. This can also save you some storage costs. If the browser doesn't specify support for compression then the bucket will transcode the content back to plain text before sending the response.

Enabling cache control

Cache control best practices suggest setting the cache-control metadata for your assets.

Whether you are using the user interface or the API to upload your content you may find that you end up with a large number of files where you need to update the cache control. Doing this via the user interface can become tiresome for more than a couple of files.

If you are looking for a quick hack script that will update the cache control settings for every file in your bucket (* not intended for very large buckets) this script could be for you:

#!/usr/bin/bash
BUCKET=<your bucket e.g. gs://some-bucket-name>
files=$(gsutil ls -r $BUCKET)
for i in $files; do
  if [[ "$i" == *: ]] || [[ "$i" == */ ]]; then
    echo "Skipping directory $i"
  else
    echo "Updating cache control for $i"
    gsutil setmeta -h "cache-control:public, max-age=604800" "$i"
  fi
done
Note that I am setting the cache maximum age to 7 days in seconds. This will significantly improve cache performance of static content and can make use of edge caching.

Set up your A record

For the custom domain names you configured above in the CDN resource setup, you will need to create A records with your DNS provider pointing at the load balancer's static IP. This step is dependent upon your provider. Once your DNS has propagated you can start to reconfigure your site to use the freshly created SSL resources.

Update: 2019 

If you aren't planning to use the default App Engine SSL issuer then you should follow the custom SSL for App Engine guide. Note that you have to concatenate your certificate with the CA bundle as:
cat my-certificate.cert ca-bundle.cert > combined.cert


Top tips for refreshing Android skills with the Little Miracle prototype

Over the last few weeks I have been working on a prototype of a pregnancy contraction tracking app and wanted to share some tips from my experience so far.

Go native with Android

Android has a number of native solutions for problems you may experience during your development. One really good example of this is the choice between Java Threads, TimerTask and Handler for updating the UI based on clock or timing events. In my case I was using a Timer to schedule asynchronous events that update the user interface, which is an anti-pattern. When updating the user interface the best practice is to use a Handler. If you are seeing leaked context warnings or experiencing issues with thread scopes then there is probably a better way.

Compat[ibility] is the main route

The compatibility libraries are not so much extras as the normal way to build things. When prototyping they can help hide a lot of the complexity of the ever-evolving Android ecosystem. Unless you are developing for a specific device, the compatibility libraries are a must!


Changing teams at Google

After spending a little over a year working with Ads partners I am now moving to a different team working with Google partners! Part of the transition involves getting up to speed with Android development, so I will be building a demo app over the next few weeks and wanted to share my experiences.

To make things more interesting I will be starting out with an older version of Android (Nougat) and migrating my app to new versions. At the same time I will migrate the app from Java to Kotlin.

In order to make things as real as possible I will be creating an app in a very competitive space (pregnancy tracking) and launching it on the app store. As I go I will share useful resources and guides that I tried to help me.

Hello World

To get started I will be taking a couple of courses to refresh my knowledge as it has been a year since I built an Android app and I am guessing a lot has changed. Here's where I started:


Redeploying Rediscover.Work to Save Money Part 2

Migrating to Google App Engine was more straightforward than trying to rebuild my environment and meant I had to change a few things in my setup.

- Unable to update app: Class file is Java 8 but max supported is Java 7: don't forget Guava; only Guava versions up to 20 support Java 1.7.
- The app config version isn't the SDK version; it is like the AWS build version and can't contain '.' periods, so version 0 it is.
- Downloading the database is a pain in a serverless environment as AWS only supports binary snapshots. But fear not: a bit of ssh / yum / security group manipulation and the download is done in about 20 minutes.
- Deploy a new SQL instance in the Cloud Console: note you need to include the create database statement or use the advanced options to specify the database to import your SQL into.
- Then hit a brick wall: "You can't have any JDBC database with Google App Engine" due to "java.lang.management.ManagementFactory is a restricted class". This is a slight misnomer; it is the connection pool creating thread pool resources that is prohibited.
- In steps a non-threaded connection pool: http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Plain_Ol'_Java
- Unfortunately the Tomcat connection pool was a red herring, failing with "java.lang.RuntimePermission" "modifyThreadGroup".
- Logging to a file doesn't work (neither does looking at the file system to figure out where you are).
- More security errors: setProperty.networkaddress.cache.ttl
- Moved to Google SQL driver "com.mysql.jdbc.GoogleDriver"
- ERROR: (gcloud.beta.sql.connect) HTTPError 400: Invalid value for: Invalid format: "2017-02-16T20:01:08+00:00" is malformed at "+00:00"


Redeploying Rediscover.Work to Save Money Part 1

After five years working in startups, and more recently in 'stealth mode' startups I have decided to go back to my favourite place to work. This means that I can now share some of my hacking projects more openly.

At the moment one of my side projects which I want to keep alive is Rediscover.Work. It is currently hosted in AWS using Elastic Beanstalk and RDS. This made sense at the time when I was trying to get the site up quickly. Now that the project is no longer funded though it is time to do a bit of belt tightening.

As with any project it is worth looking at the different options for how it could be deployed. There have never been more options than there are now:

* AWS Elastic Beanstalk + RDS: current running cost of around $96 per month. + Easy install. - Cost.
* AWS Lambda + RDS. + Simple scaling. - Migration costs are prohibitive.
* EC2 instance: $9.52 according to the price calculator. + Already have the needed tools. - More admin to manage the instance personally.
* Google App Engine and Cloud SQL: $8.01 according to the price calculator. + Ease of maintenance and App Engine security scanning. - Migration to the App Engine layout.

As I already have all the tools needed to create the EC2 Instance solution and it requires the minimum investment in tooling I chose to go down that route first and see how things played out. In part 2 of this blog I will see how hard it is to migrate to app engine as this seems like a better solution for the long term.


  1. Build dependencies and update dependency versions using

mvn versions:display-dependency-updates

  2. Package the code into a WAR to be deployed on the new shared EC2 instance.
  3. Launch a new instance which will run the site and database.
  4. Start hacking Ansible (and lose the will to live due to so many issues with config).
  5. Stop.

Look out for part 2, where I will probably decide to migrate the whole project, as managing machines is getting too painful for this time-poor old coder.


Wine Tasting at Majestic Camden

I recently attended my first wine tasting, which was organised by my private sommelier. We tried Definition wines from Majestic, which it selects as reference examples of different wine styles. We also learned how to pair food, and my sommelier won the competition to match food and wines. What great taste!

We tried:

Sauvignon Blanc grown on granite. Marlborough, NZ; heavy yield, fast ripening. Acidic. Some regions, like Chile, produce a fuller-bodied style. This one is light, fresh and green; it almost tastes like nettles.

Sauvignon Blanc. Sancerre, Loire, FR. Darker and heavier flavoured due to the soil. Bolder than the first Sauvignon Blanc because of the chalk. Easy drinking and balanced, with less acidity.

Chardonnay. Chablis, Eon, France. Unoaked; stored in stainless steel temporarily before bottling. Oak barrelling would normally reduce the acidity. Light but zesty, with more body than the Sauvignon Blancs.

Definition prosecco. Very light and medium dry, excellent with melon and prosciutto.

Pinot Noir, NZ. Picking a great one can be tricky from a French range. This one is quite spicy and peppery. Some people say it is like grape juice or raisins. I like it.

Malbec Argentina. (Mendoza). Very full bodied. Smoky blackberry. Great with a steak frites.

Tempranillo, Rioja, ES. A blend of different vineyards. The gran reserva we tried was the oldest, with around 36 months of oak ageing. This was the last and heaviest, a DOCa wine.

I'm looking forward to the next level of training!


Living with Android 4.* on Samsung S3

Due to a screen fade issue with my Samsung S7 I recently had the chance to downgrade temporarily to my old Samsung S3.

The first thing that I noticed was how much smaller the old device is, both in terms of the physical device and screen real estate. Welcome to zooming and scrolling! From a development point of view it is all too easy to think of these two devices as the same form factor, but that could be a mistake. It may be good to consider making the main view different for small handsets.

The second thing to hit me, which was more profound, was the performance, or lack of it. I removed every app I didn't absolutely need; it turns out that still wasn't enough. If you are wondering why your older relatives or friends are so slow to message back, this is probably why. It is so tempting, as a developer using the latest handset, to think people will upgrade their handsets. Guess again: according to the official Google developer pages a massive 36.3% of Android handsets were running KitKat (API level 19) or lower as of January 2017. If you are building a mass market app this simply isn't a demographic that you can afford to ignore.

There are some tech companies who occasionally do downgrade testing for a large set of their employees. Facebook is one noteworthy example.

Many apps just can't run in usable time; by that I mean that they remind me of using an x286 without the coprocessor. A particularly bad example was the Twitter app, which crashed every time the "share content on social" hook was called to find contextual information.

The question remains: does developing for the latest hardware first make sense if you are trying to reach mass appeal, or does making your developers start with the end in mind ensure your app will succeed?


Tying The Knot in London : How to Plan A Wedding

I'm really happy to announce that I am getting married. I have met the most amazing woman and we are tying the knot in January 2017!

It turns out that planning a wedding is really quite tough so I wanted to share a few tips on what we have learned so far!

  • You can buy wedding rings online (yes, those are our rings).
  • But, you need to shop for rings in person, our original ring choices were nothing like what we ended up with.
  • Check your wedding ring size when your hands are hot and cold, you don't want to end up with a ring you bought in February that you can't fit on when it comes to the big day.
  • Everything costs a lot more than you would expect. Make a budget and if you are really good, try to stick to it.
  • Start shopping for your wedding dress the day you get engaged, yes it can take that long.
  • If you can keep your party small then you can go big on the venue and get somewhere really special.
  • Photography can cost a lot more than you think, in London wedding photographers charge around £150 per hour!
  • Don't forget while planning and organising for all of your guests that it is still your day. 
  • Create a checklist of everything you want to get done. Start from high level decisions like:
    • Which country will we get married in?
    • Are we going for a church or registry office?
    • What is the overall budget?
    • How many guests will be invited?
  • Don't eat for 24 hours before your tasting dinner for the reception so you can manage everything on the menu.
  • Find a partner who doesn't want a Hollywood wedding then give her the best day of her life!
Some of the sites that really helped us were:
More tips to come...


Discovering The Beauty of Tuscany in Italy

Italy is without doubt my favourite place to visit, and my last trip was no exception. There are some really stunning places, but my favourite from my last trip was Montepulciano.

The town itself is small and can easily be walked around in a matter of hours, but to really get to know this gem of a spot you have to spend a little more time finding your way around.

One definite attraction is the wine cellars of this great old wine-growing region.

As well as the beauty of this gorgeous town our accommodation at Residence Fabroni really made the visit special. Ombretta, the manager of the property was probably the most hospitable host you could ask for. The view from the balcony was worth the trip alone.

As well as the Tuscan wine region, the trip also included Florence and Verona, the town famous for its part in Romeo and Juliet. The restaurants definitely seemed to favour eating in couples!

More photos coming soon when I manage to find a USB card reader!


QA for the Customer

Some time ago I had the chance to work with a development team which had a separate QA resource. The team was working with the standard "over the wall" mentality: developers do stuff, QA engineers pick it apart, and so on.

I heard a phrase which really gave me pause:

"QA is working on behalf of the customer"

This sounds great; the QA engineers really take ownership of thinking about the customer. But what does it say about the development team's motives? Aren't they thinking about the customer?

In the teams I have helped shape I have always avoided dedicated QA because I suspect it creates an environment where people defer responsibility. On the other hand it might mean that a stretched development team can roll out faster as they can rely on QA to catch the issues. That might sound plausible but in the world of ever accelerating deployment can manual testing really keep up?


Recruiting Full Stack Developer

I recently joined www.parktechnology.com as the CTO.

One of the great pleasures of my role is to start hiring a technology team. The first role we are trying to recruit for is a 'Full Stack Developer'. We will also be hiring Android and iOS developers, so watch this space.


We were really lucky to be able to find two really strong candidates who have a wealth of experience. Exciting times!

Blog Draft Ratio: Should We Publish Everything

Recently I have had a proliferation of ideas about blog posts to write. The challenge is knowing which are really interesting to readers. One thought was to publish every draft as it is then let views / votes / comments decide which to refine. Does anyone want this?


Random Damage

One of the hardest things to rationalise are random acts. Today I found someone had pushed my motorbike over. Unfortunately there was quite a bit of damage. I still need to strip down the rear fairing which is cracked,  but the damage already stands at over £350.

With insurance premiums spiralling and excess now nearly half the value of the bike,  maybe it is time to let it go. You just can't keep anything nice.


How to Apply Sales CRM to Job Hunting

After reading a number of books on sales about using technology like Customer Relationship Management (CRM) tools to measure and qualify your sales funnel, increasing the predictability of leads and opportunities turning into offers and contracts, I decided to apply the same approach to my job hunt.

By leads I'm talking about people or companies I haven't contacted or worked with.  Once they are qualified then they become opportunities. This seems to be a common distinction made in the sales industry. The terminology seems better suited to sales and marketing but for the sake of speed I'm going to use them here.

Many sales organisations also split the tasks of generating leads and developing them into opportunities. My lead generation activities included:

  • Attending conferences and fairs.
  • Researching top companies via lists from sites like glassdoor and the guardian top graduate employers lists.
  • Through my network trying to identify potential companies.
From these lists of companies, the qualification step is quite different from an outbound sales team's approach. In a sales team the hand-over happens after the lead generation team has made contact and qualified that the potential customer is in the market to buy. The opportunity team then focuses on 'closing' the deal.

In the job hunting context this looked like researching the companies and identifying whether they were hiring for a good work match. To stretch the metaphor further, the HR or talent team and recruiters are the 'gatekeepers'. Initial calls would be with HR, for example, and once it seemed there was potential and the discussion involved the leadership team of the company, they were converted to opportunities.

In keeping with the predictable revenue approach I tried to keep track of how many companies I found were in market. For me it was around 20%. This helped me figure out how many companies I needed to research to feed my opportunity funnel.

Using my CRM tool I set up an opportunity funnel with stages for:
  1. Introduction Email.
  2. Initial Call. 
  3. Send CV.
  4. On Site Interview(s).
  5. Proposal Meeting.
  6. Offer Review.
  7. Accepted.
This was really helpful as I could prioritise based on the likelihood of getting an offer and how far along the process I was. Being able to track the attrition also gave me an idea of how many company lead generations / qualifications I needed to do in order to 'fill' my week, the goal being to have 5-10 on-site interviews per week.
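The funnel arithmetic above can be sketched in a few lines. The ~20% in-market rate and the 5-10 weekly on-site target are from the post; the on-site conversion rate per opportunity is a hypothetical figure for illustration:

```python
# Figures from the post: ~20% of researched companies were in market,
# with a target of 5-10 on-site interviews per week.
in_market_rate = 0.20
weekly_onsite_target = 7

# Hypothetical assumption: half of qualified opportunities reach an on-site.
onsite_per_opportunity = 0.5

opportunities_needed = weekly_onsite_target / onsite_per_opportunity
companies_to_research = round(opportunities_needed / in_market_rate)
print(companies_to_research)  # companies to research per week to fill the funnel
```

Tracking your own attrition rates per stage lets you replace the assumed conversion figure with a measured one, which is exactly the predictable revenue idea applied to job hunting.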


Fix Found for Foundation 5 Syntax error unrecognized expression

If you are using Zurb Foundation 5 and are seeing this error message:

Uncaught Error: Syntax error, unrecognized expression: [data-'Times New Roman'-dropdown]

Then check out this solution from http://jelaniharris.com/2014/fixing-foundation-5s-unrecognized-expression-syntax-error/


Installation Issues of Ubuntu 15.10 and 14.04 on Lenovo X1 Carbon

I recently bought a Lenovo X1 Carbon and had a bit of a 'mare' installing Ubuntu (both 15.10 and 14.04). I had error messages like:

  • Unable To Find A Medium Containing A Live File System
  • ACPI PPC Probe failed 

And some other issues which possibly aren't related. The bottom line is that it wasn't possible to install Ubuntu from a USB disk as the live CD medium wasn't found.

There is an open bug with USB 3 disks, and I found that if I forced my laptop to treat the ports as USB 2 (by disabling USB 3 in the BIOS) everything worked fine. If you are having this issue and this post helps, feel free to tweet me your dmesg output so I can add it to the list!


Which is the next hot technology for full stack developers?

I am trying to compile a list of technology stacks that full stack developers are using and could in the future become part of a recognised approach like LAMP. Please tweet me your suggestions...

  • React / Go / Mongo : MGR 
  • React / Node / Mongo: RNM
  • Angular / Ruby / Mongo : MRA
  • Ruby / Rails / Postgres : RRP
How about layout technologies? The main runners seem to be:

  • Foundation
  • Bootstrap
Then there are view / control frameworks like:
  • Backbone
  • Angular
  • Ember