Developing features the right way is hard. Most of the time, adding features without clear criteria makes the software more complex, both for the users and for the developers working on it — and complexity is almost never good for business. Here at Labcodes, one of our main goals is to help our clients understand the impact of every new feature on their user base. To make sure that new features don't break the system or make it unnecessarily complex, we use well-established development and delivery processes, starting with design.
When we talk about design, we mean the high-level planning of the task, as well as its visual and code design. Before we start building any feature, we try to measure its real value for the end user. Through research, we understand which client profile will be most impacted, which problem the feature solves, how the competition is addressing the issue, and what the development cost (time) will be for the sprint. Doing this before any execution step guarantees that we'll focus on the absolute musts from the end user's perspective, and that we'll prioritize features by the value delivered and the time spent on execution.
With all that research in our hands, we start prototyping and testing with users. At this point, we make sure that what we understood from the research is spot on, and that the feature is clear and intuitive. This way, we avoid rework and save our clients money. After all, it is far faster to redo a prototype after feedback than to change software that has already been built. With the feature validated, we proceed to the definitive design, and the feature is ready to be coded.
Every piece of software we make is thoroughly tested. We understand that these tests are the first design feedback from our code, and, like any other feedback, the sooner we have it, the better. We follow best practices that guarantee the internal and external quality of the product, making it ready to be further extended and scaled.
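As a minimal, hypothetical illustration of tests acting as early design feedback, consider writing a unit test before the feature itself: if the test is awkward to write, that's a signal the interface needs rework before any real implementation effort is spent. The function and its behavior below are assumptions for the sketch, not code from an actual project.

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical feature under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    """Written first, these tests double as a usage example of the interface."""

    def test_applies_percentage(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_percent_keeps_price(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running the suite (e.g. with `python -m unittest`) before and during implementation turns each failing test into immediate, concrete feedback about both correctness and API design.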
But our job isn't done when the feature is delivered. We establish metrics to evaluate how these features are being used, and we analyze our entire delivery pipeline. This way, we generate more data to inform our future decisions, whether about which feature we should work on next or about the value that feature would have for the user.
What do you think about the way we make software? Don't hesitate to send us an e-mail at [email protected], or to talk to us anywhere at the event. We'll be giving three different talks throughout DjangoCon, and we're always open to sharing knowledge. We hope to see you soon — thank you for your attention!