The impact of a changing application landscape on your data architecture
Over the last few years, digitization and innovation have had an enormous impact on the application landscape. A company’s application architecture used to be relatively simple, but that is no longer the case. Numerous cloud solutions, rented on a monthly basis, now complicate things to the point where it’s no longer obvious which data is kept where. Combine this trend with the shift towards self-service applications from a data consumption perspective and the impact on data architectures is inevitable. In this blog post, we’ll dive deeper into this (r)evolution in the world of data and the impact of a changing application landscape on your data architecture.
Keeping an open data architecture
‘Data’ is a broad concept and encompasses an incredible number of domains that each require specific knowledge or some sort of specialization. There are plenty of examples: data architecture, data visualization, data management, data security, GDPR, and so on. Over the years, many organizations have tried to get a grasp on all these different ‘data domains’. And this really isn’t a cakewalk, since innovative changes are taking place in each of these domains. Additionally, they often coincide with newer concepts such as AI, data science, machine learning, and others.
In any case, it’s preferable to keep your vision and data architecture as ‘open’ as possible. This keeps the impact of future changes on your current implementation as low as possible. Ignoring such changes means slowing down innovation, possibly annoying your end-users, and vastly increasing the chance of a huge additional cost a few years down the line, when the need to revise your architecture can no longer be postponed.
Modern applications complicate combining data
The amount of data increases exponentially every year. Moreover, the new generation of end-users is used to being served at their beck and call, a trend that the current application landscape clearly supports. Many software vendors now offer real-time data within their applications in an efficient, attractive and insightful way. Huge props to these vendors of course, but this makes it even harder for CIOs to deliver combined data to end-users.
“What is the impact of a marketing campaign on the sale of a certain product?” Answering a question like this poses a challenge for many organizations. The answer requires combining data from two (admittedly well-organized) applications. For example, Atlassian offers reporting features in Jira, while Salesforce does the same with its well-known CRM platform. The reporting features in both of these software packages are actually very detailed and allow you to create powerful reports. However, it’s difficult to combine this data into one single report.
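To make the problem concrete: once the data has been extracted from both applications, answering the campaign question boils down to joining the two datasets on a shared key. A minimal sketch, assuming hypothetical extracts (the field names, records, and values below are invented for illustration, not the actual Jira or Salesforce data models):

```python
from collections import defaultdict

# Hypothetical extracts: in practice these would come from each
# application's export or API (e.g. a marketing tool and a CRM).
campaigns = [
    {"campaign_id": "C1", "product": "widget", "spend": 1000},
    {"campaign_id": "C2", "product": "gadget", "spend": 500},
]
sales = [
    {"product": "widget", "revenue": 250},
    {"product": "widget", "revenue": 400},
    {"product": "gadget", "revenue": 300},
]

# Aggregate revenue per product, then join on the shared 'product' key.
revenue_by_product = defaultdict(int)
for sale in sales:
    revenue_by_product[sale["product"]] += sale["revenue"]

report = [
    {**campaign, "revenue": revenue_by_product.get(campaign["product"], 0)}
    for campaign in campaigns
]

for row in report:
    print(f'{row["campaign_id"]}: spend {row["spend"]}, revenue {row["revenue"]}')
```

The join itself is trivial; the real difficulty the post describes lies upstream, in getting clean, timely, consistently keyed extracts out of both applications in the first place.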
Moreover, besides well-structured Marketing and Sales domains, a question like that requires an overarching technical and organizational alignment. Which domain has the responsibility or the mandate to answer such a question? Is there any budget available? What about resources? And which domain will bear these costs?
Does Self-Service BI offer a solution?
In an attempt to answer such questions, solutions such as Self-Service BI entered the market. These tools make it simple to combine data and provide insights their users might not even have thought of yet. The only requirement is that these tools need access to the data in question. Sounds simple enough, right?
Self-Service BI tools have boomed over the past few years, with Microsoft setting the example with Power BI. By making visualizations and intuitive ‘self-service data loaders’ a key component, these vendors were able to convince the ‘business’ to invest. But this creates a certain tension between the business users of these tools and CIOs. The latter slowly lose their grip on their own IT landscape, since a Self-Service BI approach may also spawn a lot of ‘shadow BI’ initiatives in the background. For example, someone may have been using Google Data Studio on their own initiative without the CIO knowing, while that CIO is trying to standardize on Power BI as a toolset. The result: tons of data duplication, security risks, and then we haven’t even talked about GDPR compliance yet.
Which other solutions are there?
The standard insights and analytics reports within applications are old news, and the demand for real-time analytics, also known as streaming analytics, is rising. For example, online stores display the current stock of a product on the product page itself. Pretty run-of-the-mill, right? So why is it so hard to answer the question regarding the impact of my marketing campaign on my sales in a report?
The demands and needs for data are changing. Who is the owner of which data, and who determines its uses? Does historical data disappear if it’s not stored in a data warehouse? If the data is still available within the application where it was initially created, how long will it remain there? Storing the data in a data lake or data repository is a possible cheap(er) solution. However, this data is barely organized, if at all, making it difficult to use for things like management reporting. Perhaps offloading this data to a data warehouse is the best solution? Well-structured data, easily combined with data from other domains and therefore an ideal basis for further analysis. But… the information is not available in real-time, and this solution can get pretty costly. Which solution best fits your requirements?
As you’ve noticed by now, it’s easy to sum up a ton of questions and challenges regarding the structuring of data within organizations. Some data-related questions require a quick answer, other more analytical or strategic questions don’t actually need real-time data. A data architecture that takes all these needs into account and is open to changes is a must.
We believe in a data approach in which the domain owner is also the owner of the data and makes this data available to the rest of the organization. It’s the responsibility of the domain owner to organize their data in such a way that it can answer as many questions from the organization as possible. It’s possible that this person doesn’t have the necessary knowledge or skills within their team to organize all of this. Therefore, a new role within the organization is necessary to support domain owners with knowledge and resources: the Chief Data Officer (CDO). They orchestrate everything and anything in the organization when it comes to data and have the mandate to enforce general guidelines. Research shows that companies that have appointed a CDO are more successful when rolling out new data initiatives.
ACA Group commits itself to guiding its customers as best as possible in their data approach. It’s vital to have a clear vision, supported by a future-proof data architecture: an architecture open to change and innovation, not just from a technical perspective, but also when it comes to changing data consumption demands. Relevant to the new generation, and a challenge for most data architectures and organizations.