
How We Built a Free Credit Report

We recently launched our new free credit report on TotallyMoney and we had to do it in a fairly short time with a small team.  To be fair, small, focused teams are the way we prefer to work, but it does mean we have to make careful choices about tools and technology to enable the team to move quickly.  In this post I’ll summarise some of the choices we made and try to explain why we made them.

Before we get to technology it’s important to mention the way we approach development.  Of course, we’re employing all the good practices like testing and continuous integration, but we try to push this a little further and aim for continuous deployment.  We try to deploy to production as early as possible (day one, ideally) and then to deploy multiple times every day thereafter.  This removes the ceremony around deploying and makes it commonplace.  Our product managers also do a great job of breaking work down into small, deployable features for the developers; without this, continuous deployment would be impractical.  We also use feature toggles, which allow us to push features to production but turn them off if we don’t want the functionality live yet.  All of the above is done to create an environment in which a team can move quickly to create a large number of features in a short time.
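At its simplest, a feature toggle is just a named flag checked before a feature is shown or run.  As a rough illustration (not our actual implementation — the flag names and `isEnabled` helper here are hypothetical), a minimal sketch in TypeScript might look like this:

```typescript
// A minimal feature-toggle store: flags would typically be loaded
// from config (environment variables, a database, etc.) at startup.
type FeatureFlags = Record<string, boolean>;

const flags: FeatureFlags = {
  // Hypothetical flag names, for illustration only.
  creditReportTimeline: true,
  scoreHistoryChart: false,
};

// Check a flag, defaulting to "off" for unknown names so that
// half-deployed features stay hidden rather than appearing by accident.
function isEnabled(flags: FeatureFlags, name: string): boolean {
  return flags[name] === true;
}

console.log(isEnabled(flags, "creditReportTimeline")); // true
console.log(isEnabled(flags, "scoreHistoryChart"));    // false
console.log(isEnabled(flags, "unknownFeature"));       // false
```

Defaulting unknown flags to off is the important design choice: it lets code for a new feature ship to production days before the feature is switched on.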

That brings us to the technology.  Again, what is paramount is using things that allow us to go fast.  The foundation of our tech stack is Amazon Web Services.  AWS, or any cloud environment really, is incredibly important if you want to spin up infrastructure quickly and have the flexibility to make changes as the architecture develops.  We used the EC2 Container Service (ECS) with Docker so that we could define our infrastructure as code.  Using ECS has some nice additional benefits, like making disaster recovery incredibly easy, or almost a non-issue.  Because we’re hosting our Aurora database in RDS, we get the same level of resilience at the storage tier.  With all of this we also get to take advantage of auto-scaling, which will be useful as we grow.  The choices we made regarding infrastructure really help us keep the team lean and focused on creating features with business value, as opposed to doing technical tasks just to keep things running.
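To give a flavour of what “infrastructure as code” means in practice with ECS: each service is described by a task definition that is checked into source control and deployed like any other artifact.  This is a simplified, hypothetical example (the service name, image, and resource values are illustrative, not our real configuration):

```json
{
  "family": "credit-report-api",
  "containerDefinitions": [
    {
      "name": "credit-report-api",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/credit-report-api:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}
```

Because the whole environment is declared this way, recreating it in a fresh region after a disaster is mostly a matter of re-applying the same definitions, which is why recovery becomes almost a non-issue.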

That’s the infrastructure - the foundation - so let’s move on to the way we compose the application, or the architecture.  We’ve followed a microservices style and tried to split the application into a number of simple parts.  On the surface we have a fairly simple website, but there are some interesting things going on behind the scenes.  For example, there’s a scheduling component that sits in the background and kicks off various actions at predetermined times.  We use SQS and a product called Hangfire to help with that.  It means we don’t have to worry about a lot of the logic for the mechanics of scheduling, as Hangfire takes care of it for us.  We split the parts of the application into services with a single responsibility and host each on its own instance, which means we can make changes to any one thing safe in the knowledge we’re not affecting anything else.  These kinds of choices are vital when you have a compact team and a deadline.
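Hangfire is a .NET library, but the core pattern it abstracts — jobs stored against a due time and dispatched by a background loop — is easy to sketch generically.  Here’s a toy TypeScript illustration of that pattern (not our implementation, and nothing like a substitute for Hangfire, which also handles persistence, retries, and distributed workers):

```typescript
// A toy in-memory scheduler: jobs are stored with a due time and
// run when a background "tick" finds them due.
type Job = { runAt: number; action: () => void };

class Scheduler {
  private jobs: Job[] = [];

  schedule(runAt: number, action: () => void): void {
    this.jobs.push({ runAt, action });
  }

  // Run every job whose due time has passed; keep the rest queued.
  tick(now: number): void {
    const due = this.jobs.filter((j) => j.runAt <= now);
    this.jobs = this.jobs.filter((j) => j.runAt > now);
    due.forEach((j) => j.action());
  }
}

const log: string[] = [];
const scheduler = new Scheduler();
// Hypothetical actions, purely for illustration.
scheduler.schedule(100, () => log.push("refresh credit report"));
scheduler.schedule(200, () => log.push("send notification"));

scheduler.tick(150); // only the first job is due; log: ["refresh credit report"]
scheduler.tick(250); // second job fires; log gains "send notification"
```

Writing even this much correctly — let alone making it survive restarts and run across instances — is exactly the kind of mechanics a library like Hangfire lets a small team skip.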

Finally, I’ll briefly mention the major languages and frameworks we use.  The choices we made here were essentially pragmatic ones, with the intention of reducing risk.  In other words, we stuck to what we know.  On the front-end we use React and Redux, along with canopy for testing.  We’ve used the same combination elsewhere in TotallyMoney and we have a decent amount of expertise, so they were obvious choices.

Plus, we like React, and as we knew we were going to build native apps we wanted to take advantage of React Native.  Finally, our back-end services are built using C# and F#.  We’ve been using .NET for many years now, so these felt like solid choices.

We went live on the day we planned to after about six months of development, which is testament to the work of the team.  It’s unusual for us to plan in this way; typically we’d just go live as quickly as we could.  However, in this instance we were working with a partner, so the scheduling was necessary.  We’ve been live for a couple of months now and are steadily ramping up the traffic with no issues so far.  We hope the traffic continues to grow, and that the rate of growth increases as our marketing does, and we’re confident we’ve built an application that can handle it.