Working with Initforthe: What does a successful software development project really look like?

We've been spending a bit of time recently reflecting on the way we work, and how that translates into highly successful projects for our clients, and ones that our team love to work on. I wanted to share some of that learning because it really does highlight some of the important aspects of our process and culture that make us "us", and what you should expect if you choose to work with us, either as a client, or as a part of the team.

We've written before about our Agile development process, and how we follow the Agile Manifesto. Those things might tell you about the day-to-day operation of a project, but they don't get to the nub of how we actually end up with a project that everyone can be proud of, and better still, smile at!

Let's dive in...

Never making assumptions

Never making assumptions is the backbone of everything we do. Humans are brilliant at making assumptions about all sorts of things, so it's quite a challenge for us to actively fight against that. We assume we know what others are thinking and what particular behaviours mean, and we find ourselves projecting those assumptions onto others without those beliefs necessarily being accurate. When it comes to our work, this cuts both ways.

We will not make assumptions about our clients' businesses, their people, customers or ambitions. This translates into us asking lots of deep and challenging questions about the business, and prying into its inner workings. Clearly that takes a lot of trust: we're learning about our clients' innermost secrets, and they don't want those laid bare for all to see, least of all by us.

In asking those questions, it's important too that our clients don't make assumptions about our current level of knowledge in the answers that they give, because any assumptions made only leave the door open for misinterpretation, and thus: project failure. Not a good outcome for our client. Not a good outcome for us.

Understanding your business

Learning about a client's business is the beginning of any project. If you ask any builder, the first lesson they teach their apprentice is this: measure twice, cut once. What that really means is that you need a detailed, well-written plan. So here comes the barrage of questions. We really do ask a lot of them (sorry, not sorry)!

We want to learn about the business. What does its future look like? What goals and aspirations are there? What does it do today? We need to understand that in real depth. How does the business work? How do you get from one place to another, with a happy customer at the end of it? And is the end really the end, or is there something that comes after that? What parts of that are difficult or frustrating right now?

I've written before about understanding the value of automation. How long do those frustrating things take to do? How many times do they happen in a day/week/month? How much money does that cost? What is the opportunity cost - how much more business could be done if that time was freed up?
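To make that concrete, here's a rough back-of-the-envelope sketch of the kind of sum we do. The numbers below are entirely made up for illustration; yours will be different, and working them out is exactly what the questions above are for.

```python
# Illustrative numbers only - every business is different.
minutes_per_task = 20   # how long the frustrating job takes each time
times_per_week = 30     # how often it happens
hourly_cost = 35        # fully loaded cost of the person doing it, in GBP

hours_per_year = minutes_per_task / 60 * times_per_week * 48  # ~48 working weeks
annual_cost = hours_per_year * hourly_cost

print(f"{hours_per_year:.0f} hours a year, costing roughly £{annual_cost:,.0f}")
# -> 480 hours a year, costing roughly £16,800
```

And that's before you count the opportunity cost: what those 480 hours could have been spent on instead.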

Getting answers to all these questions gives everyone a key piece of knowledge: which part of the business can we disrupt to get the maximum return on investment as quickly as possible? This will invariably lead to yet more questions around decision making and how information flows between the business and individual people. This helps us document your actual, real-world business processes as process maps. We're not trying to document your whole business (yet), but rather the area we're focusing on, and its touch points with others (people, departments, suppliers, customers).

Understanding "the customer"

To get to the bottom of exactly how the job gets done, we need to speak to the people involved. We call them "the customer" because ultimately they're the users who interface with the system in some way. They could be a member of the team, an actual customer, or part of the supply chain, but if the business needs to communicate with them in some way to get the job done, they are "the customer".

To that end, what do we need to learn about "the customer"? We need to understand where they might be at the point at which they come into contact with the system, and what else might be on their plate. We need to learn about their priorities, so that we can slot into their task list in the least disruptive way. We need to learn about the technology they have to hand at that point in time, so we can design an interface to suit that purpose (it's no use building a desktop screen full of form fields if the user will have a mobile phone to hand, or a narrow mobile view if they're sat at a desk).

And then we need to learn about what that person is trying to achieve, both in work now, and in life longer term. Because part of our role in building automation into businesses is helping people transition into different, more fulfilling areas of work. Our clients don't get rid of people off the back of the automation work we do. If they do, we fire them as a client. The knowledge people carry around with them is the most valuable asset a business can possess. As a result, the clients who leverage the skill and ambition of their collective team make the biggest returns on their investment.

How do we actually learn about the customer? Literally, by talking to them, and asking them lots more questions to find out about them and what we can do to make their lives (certainly at work) easier.

Building in collaboration

So we know what we need to build now, right? Hold your horses! These cats won't skin themselves, and there are a million and one possible ways to do it! What we do have now is a hopefully fairly solid understanding of the business problem we're trying to solve, the people and processes involved in that problem (and by extension, the solution), and we've probably built up a mental architecture as to how we're going to make the solution a reality.

Now for user stories. For those not in the know, User Stories are little instructional requirements that begin the conversation of "how" and "when", given we now understand "why" that solution needs to be created. Take for example the following user story:

As a customer I want to receive a notification when my lease is near to expiry

It creates a whole load more questions: what is "near"? How does the customer want to receive the notification? Do we want to give them a choice? How do we present that choice to them? If we send an SMS, what options do we have for sending it, and do we want to handle any responses? Those are just the first few that came to mind; the list of questions is a long one, and until we have answers to all the forks in the road, we won't be able to get going. That's the collaboration - we can't answer those questions on our own, but we can advise on best practice, and help guide a positive customer experience.
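To give a flavour of how the answers to those questions end up shaping the software, here's a minimal sketch of what the lease expiry check might become once the forks in the road have been decided. Everything here - the 60-day threshold, the choice of email or SMS, the names - is a hypothetical answer for illustration, not the "right" one:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical answers to the questions above: "near" means 60 days,
# and the customer chooses between email and SMS when they sign up.
NEAR_EXPIRY = timedelta(days=60)

@dataclass
class Lease:
    customer_email: str
    customer_mobile: str
    preferred_channel: str  # "email" or "sms", chosen by the customer
    expiry_date: date

def notify_if_near_expiry(lease: Lease, today: date, send_email, send_sms) -> bool:
    """Send a single notification if the lease expires within the threshold."""
    if lease.expiry_date - today > NEAR_EXPIRY:
        return False
    if lease.preferred_channel == "sms":
        send_sms(lease.customer_mobile, "Your lease is due to expire soon.")
    else:
        send_email(lease.customer_email, "Your lease is due to expire soon.")
    return True
```

The point is that every one of those decisions came out of the conversation with the client and their customers, not out of our own heads.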

Once we've done that, we start to build. Usually for more complex stories, we will stub out a mock version of the interface at each point along the way so that we can get immediate feedback from the customer (remember, that's the user of the system at that point). We'll work with that person (or people) to make sure that it works for them, and once everyone is happy that the problem at hand is solved, we build it out and move on to the next problem. Rinse, repeat.

We're usually in almost constant dialogue with our clients and the people in their organisation. We talk with their own customers less often, but we do engage with them to make sure that what we're serving them works for their needs. Ultimately, our client is part of our team, and we're part of theirs.

Test, test, test, then test again

Here's the interesting thing. In a previous job, back in 2000, we used to have a bank of computers of varying shapes, sizes and operating systems on which we tested the websites we built. The heady days of making a change on the 7th floor, and having to go down to the 3rd to test that one-liner on 7 different machines, across 4 different browsers each. These days we automate much of that, and most browsers now share common underpinnings (Blink and WebKit, for those in the know): Chrome, Edge and Opera are all built on Chromium's Blink engine, which is itself a fork of WebKit; Safari runs on WebKit; and Firefox is the outlier with its own engine. That means far fewer genuinely different browsers to worry about.

By automating tests, we can validate that the system works, but it also provides a number of other benefits down the road. When you want to build a new feature, the classic problem we hear from clients is that "their old developers broke the system" whilst building the new thing. Automated tests help to prevent that by ensuring that what works continues to work. If it fails the tests, it isn't allowed near a user.
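As a hedged illustration (reusing the hypothetical lease example from earlier), one of those automated tests might look something like this with pytest:

```python
from datetime import date

# Hypothetical module containing the earlier sketch
from lease_notifications import Lease, notify_if_near_expiry

# A regression test: if someone changes the notification logic and breaks it,
# this fails before the change ever reaches a user.
def test_notification_sent_when_lease_is_near_expiry():
    sent = []
    lease = Lease(
        customer_email="tenant@example.com",
        customer_mobile="+447700900000",
        preferred_channel="email",
        expiry_date=date(2024, 3, 1),
    )
    notified = notify_if_near_expiry(
        lease,
        today=date(2024, 1, 15),  # 46 days before expiry, inside the 60-day threshold
        send_email=lambda to, msg: sent.append((to, msg)),
        send_sms=lambda to, msg: sent.append((to, msg)),
    )
    assert notified is True
    assert sent == [("tenant@example.com", "Your lease is due to expire soon.")]
```

Hundreds of small tests like this run automatically on every change, which is what lets us say with confidence that what worked yesterday still works today.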

Further, because software is a forever-changing world, we often need to upgrade the underlying dependencies (libraries of code that the project relies on to function) to newer versions. Without automated tests, there's no reliable way to validate that an upgrade was successful and that nothing broke. With them, you can not only confirm that everything still works, but also see exactly what broke and what needs to be done to fix it when it doesn't. That makes upgrades much less painful, and allows the system to be kept up to date more regularly and with more confidence. Which in turn means that it never has to become so old that it just needs to be thrown away and started again. And that gives you business continuity.

Our expertise, and where we add value for our clients, is really all about learning, applying that knowledge by using (or specifically not using) technology to solve the need, and testing against that learning. Whilst money (its availability) and deadlines (typically external, and outside of your control) have to carry some weight, for us at Initforthe the measure of success for a project is not defined by either of those things, but by the raw human emotion it creates: has it fulfilled a need that existed, has that need been accurately understood and solved, and does it spark joy for those who use it? If not, that's a failure in my book.