About Wise Music 

Wise Music Group is an international conglomerate of wholly-owned companies and home to some of the world’s leading independent classical music publishing houses, as well as a number of pop music catalogs. In 2023, the group celebrated its golden jubilee; over those 50 years, it has built and acquired over sixty international publishing houses and more than thirty notable imprints, many of which represent iconic household names such as Thelonious Monk, David Bowie and The Zombies.

With its extensive repertoire of both commercial and classical music, including a wealth of original samples, Wise Music had previously struggled to fully capitalize on its vast catalog of works due to catalog conflicts and untracked or unclaimed usage.

In January 2021, Wise Music partnered with Orfium, taking a crucial step towards managing its works on YouTube more effectively, streamlining its catalogs and boosting revenues for its artists and songwriters.

Orfium’s Solution 

Orfium undertook the full management of Wise Music’s YouTube account in the United States to ensure the account is optimized and its revenue potential maximized. The services provided by Orfium include:

Results 

Wise Music more than doubled its YouTube revenue within just one year of partnering with Orfium. In 32 months, 588K recordings were matched and 654K conflicts resolved, while its outstanding catalogs were brought up to date.

We’ve been very impressed with Orfium’s team and technology. They are experts in their field and are constantly working to increase the value of YouTube revenues for our business in the United States. We feel confident knowing they’re managing our CMS to ensure all of our songwriters and artists are getting the most value from their work.

David Holley, CEO, Wise Music Group

Could your music catalog be generating more revenue across UGC platforms? Talk to Orfium today to find out.

Talk to Orfium Today

Cavendish Music, the UK’s largest independent production music publisher, traces its origins back to 1937 as Boosey & Hawkes Recorded Music Library. As part of the Anthem Entertainment family, Cavendish’s music has made its mark in numerous productions, including major motion pictures, television shows, video games, and commercials.

In January 2023, Cavendish joined forces with Orfium to manage its YouTube administration, streamline its catalog management, and maximize revenues using Orfium’s advanced AI and Machine Learning (ML) technology across UGC platforms.

Cavendish initially teamed up with Orfium on a trial basis to see what impact they could have on their mega-hit ‘Funny Song.’ Despite billions of views and widespread popularity across TikTok, YouTube and other UGC platforms, the numbers weren’t adding up. Cavendish wanted to explore a better YouTube management and monetization strategy, and Orfium was keen to show them what we could do.

After assigning the ‘Funny Song’ asset to Orfium, Cavendish’s YouTube revenue skyrocketed by 500% within five months. In April alone, Orfium generated three times the revenue that ‘Funny Song’ had earned in the whole year prior!

Impressed by the numbers and the communication at every stage of the process, Cavendish moved its entire catalog over to Orfium and they’re seeing positive results across the whole repertoire. Cavendish’s sister companies have now joined Orfium as well.

Orfium’s full management service for Cavendish includes:

Manual Claiming on YouTube: Orfium’s team of copyright experts employs AI and Machine Learning (ML) technology to identify unlicensed usage of “Funny Song” and Cavendish’s other works. They strategically place claims on videos that go undetected by YouTube’s fingerprinting technology. This targeted approach captures the most valuable videos.

Licensing: Leveraging SyncTracker, Cavendish automates the entire licensing process, allowing them to serve creators on YouTube without the need for a lengthy claims process. Orfium enables licenses on a per-video or channel basis, providing reporting and full visibility into the usage of Cavendish’s catalog.

Catalog Management: Orfium uses its RightsCloud technology to optimize Cavendish’s catalog management, metadata, and claiming. We create a comprehensive rights picture for each song, ensuring fully optimized claiming across all territories with the cleanest, most up-to-date metadata available.

Data & Reporting: Orfium generates detailed performance reports, offering Cavendish full visibility into the health and performance of their catalog. With actionable insights, Cavendish can drive more revenue and growth, confident in the knowledge that their catalogs are being managed and monetized correctly.

Results:

Within five months of partnering with Orfium for catalog management and licensing, Cavendish experienced remarkable results:

…and all of this within the first five months of the partnership. Lots more to come.

In Cavendish’s words…

“Orfium took on “Funny Song” at the beginning of January, and by the end of the January statement, we were simply gobsmacked. We have been gobsmacked every single month since then. Orfium has exceeded our expectations, and we are genuinely happy with the partnership.”

Gareth Owen, Managing Director, Cavendish Music

Could your music catalog be generating more revenue across UGC platforms? Talk to Orfium today to find out.

Talk to Orfium Today

This week, we announced our partnership with PRS for Music, the leading music rights organization. This collaboration is set to revolutionize the music industry in Africa as PRS expands its licensing coverage to music users based in Africa, supported by Orfium’s licensing and technology infrastructure. This will provide access to tens of millions of works, including many of the most successful songs and compositions of today and the last century. The thriving African music industry is set to experience a boost in innovation, efficiency, and growth through this collaboration, which will ultimately benefit the talented songwriters, composers, and publishers across the continent as well as the music users.

Orfium will be licensing the PRS repertoire and providing the technology infrastructure needed to efficiently serve the African music market. The partnership marks a significant milestone, paving the way towards equitable remuneration for talented songwriters, composers, and publishers across Africa for their creative work.

This partnership is expected to bring significant benefits, including increased speed to market for music creators, improved music discovery, and cost efficiencies.

An exciting aspect of this partnership is the expansion of PRS for Music’s Major Live Concert Service, the royalty collection service for large concerts, which will now be available for events held across Africa. This is a significant step in expanding the global reach of PRS for Music’s services and supporting music creators in Africa. 

In addition to PRS’s existing agreement with SAMRO, the collecting society based in South Africa, the partnership with Orfium will provide a framework for PRS members to be paid when their works are used in some of the world’s fastest-growing music markets. This is a crucial development that ensures songwriters, composers, and publishers are fairly compensated for their music across the African continent.

“We’re incredibly excited to partner with PRS for Music. Orfium exists to support and improve the global entertainment ecosystem so that creators everywhere can be paid fairly for their work. Over the last three years, we have invested heavily in building a state-of-the-art rights management platform to support our partners in the licensing and remuneration of music rights in the entertainment industry. Orfium looks forward to working with PRS as their trusted partner to support this incredible region and contributing to Africa’s future as a high-growth music market.”

Rob Wells, CEO, Orfium

Stay tuned to the latest news and updates from Orfium. Subscribe to our newsletter below.

We’re back

As described in a previous blog post from ORFIUM’s Business Intelligence team, the set of tools and software we use has varied as time progressed. A shorter list of software was used when the team was two people strong; a much longer and more sophisticated one is currently in use.

2018-2020

As described previously, the two people on the BI team were handling multiple types of requests, both from internal customers across the company and from ORFIUM’s external customers. For the most part, they were dealing with Data Visualization, alongside a small amount of data engineering and some data analysis.

Since the team was just the two of them, tasks were more or less divided into engineering and analysis versus visualization. As you might guess, combining data from Amazon Athena with Google Spreadsheets or ad-hoc CSVs required a lot of Python scripting. Data were retrieved from these various sources and, after some (more often than not complex) transformations and calculations, the final deliverables were CSVs to either send to customers or load into a Google Sheet. In the latter case, a simple pivot table was also bundled into the deliverable to jump-start any further analysis by the, usually internal, customer. A minimal sketch of that kind of script is shown below.
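The snippet below is a minimal sketch of the kind of ad-hoc Python script described above, assuming pandas and the pyathena driver; the table, sheet and bucket names are hypothetical, not ORFIUM’s actual sources.

```python
import pandas as pd
from pyathena import connect  # assumes the pyathena Athena driver is installed

# Pull the base dataset from Amazon Athena (hypothetical database/table names)
conn = connect(
    s3_staging_dir="s3://example-athena-results/",
    region_name="us-east-1",
)
usage = pd.read_sql(
    "SELECT asset_id, client, territory, revenue FROM example_db.youtube_usage",
    conn,
)

# Combine with a manually maintained Google Sheet, downloaded as a CSV
catalog = pd.read_csv("catalog_metadata.csv")
merged = usage.merge(catalog, on="asset_id", how="left")

# A typical "complex" transformation: aggregate revenue per client and territory
summary = merged.groupby(["client", "territory"], as_index=False)["revenue"].sum()

# Deliverable: a CSV to send to the customer or to load into a Google Sheet
summary.to_csv("client_revenue_summary.csv", index=False)
```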

In other cases, where the customer requested graphs or a whole dashboard, the BI team simply used Amazon Athena’s SQL editor to run the exploratory analysis, and once the proper dataset for analysis was eventually discovered, we saved the results to a separate schema/dataset in Athena itself. The idea behind that approach was to make use of the native integration between Amazon’s own tools, so we delivered our solutions through Amazon QuickSight. At the time, this seemed like the decision that would produce deliverables the fastest, though not the most polished or the most scalable.

QuickSight offers very good integration with Athena, both being under the Amazon umbrella. To be completely honest, though, at that point in time the BI Analyst’s working experience was not optimal. On the consumer side, the visuals were functional but not particularly attractive, and from ORFIUM’s perspective, a number of full AWS accounts were needed to share our dashboards externally, which created additional cost.

This process changed slightly when we decided to evaluate Tableau as our go-to solution for Data Visualization. One of the two BI members at the time leaned strongly towards Tableau and decided to pitch it. The adoption proposal was eventually approved by ORFIUM’s finance department, and Tableau came into our quiver, soon becoming our main tool of choice for Data Visualization. It allows management to make better, more informed decisions and showcases the value our company can offer to current and potential future clients.

This part of BI’s evolution led to the deprecation of both QuickSight and Python: pure SQL queries and DML were written to create tables within Athena, and some custom SQL queries were embedded in the Tableau connection to the Data Warehouse. We focused on uploading ad-hoc CSVs or data from GSheets into Athena, and from there the almighty SQL took over; a sketch of that loading step follows.
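Below is a hedged sketch of loading an ad-hoc CSV into Athena so that SQL can take over, assuming the AWS SDK for pandas (awswrangler); the bucket, database and table names are hypothetical.

```python
import awswrangler as wr
import pandas as pd

# An ad-hoc export, e.g. downloaded from a Google Sheet
adhoc = pd.read_csv("adhoc_rates.csv")

# Write it to S3 as Parquet and register the table in the Glue/Athena catalog
wr.s3.to_parquet(
    df=adhoc,
    path="s3://example-data-lake/adhoc/rates/",
    dataset=True,
    database="example_analytics",
    table="adhoc_rates",
    mode="overwrite",
)

# From here on, plain SQL (views, DML, Tableau custom SQL) can reference
# example_analytics.adhoc_rates directly in Athena.
```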

2021- 2022

The team eventually grew larger and more structured, and the company’s data vision shifted towards Data Mesh. Inevitably, we needed a new and extended set of software.

A huge initiative to migrate our whole data warehouse from Amazon Athena to Snowflake began, with BI’s main data sources playing the role of early adopters. The YouTube reports were the first to be migrated, and shortly after, the Billing reports were created in Snowflake. With that, the road was open and well paved for the Business Intelligence team to start using the vast resources of Snowflake and building the BI layer.

What started as a small code-migration project, meant to point our code at the proper source and recreate the tables Tableau expected from us, turned into a large project of fully restructuring the way we worked. In the past, the Python code used for data manipulation and the SQL queries that created the datasets to visualize were stored, respectively, in local Jupyter notebooks and in either View definitions in Athena or Tableau data source connections. There was no real version control: a GitHub repo existed, but it was mainly used as code storage for ad-hoc requests, with limited attention to keeping it up to date or explaining the reasoning behind updates. There were no feature branches, and almost all new commits on the main branch added new ad-hoc files to the root folder with the default commit message. This situation, despite being a clear pain point for the team’s efficiency, emerged as a huge opportunity to scrap everything and start working properly.

We set up a working guide for our Analysts: training on git and GitHub usage, working with branches, pull request templates, commit message guidelines, and SQL formatting standards, all deriving from the concept of having an internal Staff Engineer. We started calling the role Staff BI Analyst, and we currently have one person in it setting the team’s technical direction. We’ll discuss this role further in a future blog post.

At the same time, we were exploring how to combine tools so that BI Analysts could focus on writing proper, efficient SQL queries without being fully dependent on Data Engineers to build the infrastructure for data flows, and without requiring Python knowledge to create complex DAGs. dbt and Airflow surfaced from our research (and, frankly, from the overall hype), so we decided to go with the combination of the two.

Initially the idea was to just use Airflow: an elegant loop would scan the dags folder and, using the folder structure and naming conventions of the SQL files, turn each subfolder into a DAG in the Airflow UI. Each file in the subfolder would become a SnowflakeOperator task, and the dependencies between tasks would be derived from the files’ naming convention. In practice, a given folder structure would automatically produce a dynamic DAG, along the lines of the sketch below.
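A simplified sketch of that dynamic-DAG idea: every subfolder of a dags/sql/ directory becomes a DAG, every .sql file becomes a SnowflakeOperator task, and a numeric filename prefix (01_, 02_, …) encodes the execution order. The paths, connection id and exact convention are assumptions for illustration, not our production code.

```python
import os
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

SQL_ROOT = os.path.join(os.path.dirname(__file__), "sql")

for dag_name in sorted(os.listdir(SQL_ROOT)):
    dag_dir = os.path.join(SQL_ROOT, dag_name)
    if not os.path.isdir(dag_dir):
        continue

    dag = DAG(
        dag_id=f"bi_{dag_name}",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    )

    previous = None
    # 01_staging.sql runs before 02_core.sql, which runs before 03_marts.sql, ...
    for sql_file in sorted(f for f in os.listdir(dag_dir) if f.endswith(".sql")):
        with open(os.path.join(dag_dir, sql_file)) as fh:
            task = SnowflakeOperator(
                task_id=sql_file[:-4],  # strip the .sql extension
                sql=fh.read(),
                snowflake_conn_id="snowflake_default",
                dag=dag,
            )
        if previous is not None:
            previous >> task
        previous = task

    # Register the DAG in the module's globals so Airflow discovers it
    globals()[dag.dag_id] = dag
```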

No extra Python knowledge needed, no Data Engineers needed, just store the proper files with the proper names. We also briefly experimented with dag-factory, but we soon realized that Airflow should just be the orchestrator of the analytics tasks, with the analytics logic itself handled by something else. All of this was abandoned very soon after, when dbt was fully onboarded onto our stack.

Anyone who works in the Data and Analytics field must have heard of dbt, and if they haven’t already, they should. That is why there is nothing particularly innovative to describe about our dbt usage. We started using dbt early in its development, first installing v0.19.1, and after an initial setup period with our Data Engineers we combined Airflow with dbt Cloud for our production data flows and the dbt Core CLI for local development. Soon after that, in some of our repos, we started using GitHub Actions to schedule and automate runs of our Data Products.

All of the BI Analysts on our team are now expected to attend the courses from the Learn Analytics Engineering with dbt program offered at dbt Learn; the dbt Analytics Engineering Certification Exam remains optional. We are all fluent in using the documentation and the Slack community. Generic tests defined dynamically through YAML, alerts in our instant messaging app whenever a DAG fails, and snapshots are just some of the features we have adopted to help the team. As mentioned above, our Staff BI Analyst plays a leading role in creating this culture of excellence. A minimal sketch of the failure-alert pattern is shown below.
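The alerting piece is a standard Airflow pattern: a failure callback that posts to an incoming webhook of a chat tool. The sketch below assumes a hypothetical webhook URL and message format, not our actual setup.

```python
from datetime import datetime

import requests
from airflow import DAG

WEBHOOK_URL = "https://hooks.example.com/services/XXX"  # hypothetical webhook


def notify_failure(context):
    """Post a short message with the failed DAG, task and run date."""
    ti = context["task_instance"]
    requests.post(
        WEBHOOK_URL,
        json={
            "text": f"DAG `{ti.dag_id}` failed on task `{ti.task_id}` "
                    f"(run date {context['ds']})."
        },
        timeout=10,
    )


dag = DAG(
    dag_id="bi_layer_daily",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"on_failure_callback": notify_failure},
)
# ... SnowflakeOperator / dbt tasks are added to this DAG as usual ...
```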

There it was. We endorsed the Analytics Engineering mindset, we reversed ETL and implemented ELT, finally removing the hard dependency on Data Engineers. It was time to enjoy the fruits of Data Mesh: Data Discovery and Self-Service Analytics.

2023-beyond

Having more or less implemented almost all of ORFIUM’s Data Products on Snowflake, with proper documentation, we just needed to proceed to the long-awaited data democratization. Two key pillars of democratizing data are making it discoverable and making it available for analysis by non-BI Analysts too.

Data Discovery

As DataMesh principles dictate, each data product should be discoverable by its potential consumers, so we also needed to find a technical way to make that possible.

We first needed to ensure that data were discoverable, so we started testing out tools for data discovery. Among the ones tested was Select Star, which turned out to be our final choice. While we were trying to find the proper tool for our situation, Select Star was still early in its evolution and development, so, after realizing our sincere interest, they invested in building a strong relationship with us, consulting us closely when building their roadmaps and communicating very frequently to get our feedback as soon as possible. The CEO herself, Shinji Kim, attended our weekly call, helping us make not just our data discoverable to our users, but the tool itself easy for them to use, in order to increase adoption.

Select Star offered most of the features we knew we wanted at the time, and its quite attractive pricing plan was in line with our ROI expectations.

Now, more than a year after our first implementation, we have almost 100 active users on Select Star, which is a sizeable share of the internal data consumer base within ORFIUM, given that we have quite a large operations department whose people do not need to access data or metadata.

We are looking to make it the primary gateway from our users to our data. Every analysis, even an initial idea, should start with Select Star to explore whether the data exist.

Now, data discovery is one thing; documentation coverage is another. There’s little point in making it easy for everyone to search table and column names alone. We need to add metadata to our tables and columns so that Select Star’s search results parse that content too and provide all available information to seekers. Working in this direction, we have established, within the Definition of Done for a new table in the production environment, a clause that the table and its columns must be documented. Table documentation should include not only technical details like primary and foreign keys, level of granularity, and expected update frequency, but also business information such as the source of the dataset, as this varies between internal and external producers. Column documentation is expected to include expected values, data types and formats, but also business logic and insight.

The Business Intelligence team uses pre-commit hooks to ensure that every table we produce has descriptions for the table itself and all of its columns, but we cannot always be sure of what is going on in other Data Products. As Data culture ambassadors (more on that in a separate post too), BI has set up a documentation coverage monitoring dashboard to quantify the docs coverage of tables produced by other Products, raising alerts when the coverage percentage falls below the pre-agreed threshold. A sketch of the kind of check our hook runs is shown below.
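The sketch below illustrates the kind of check such a pre-commit hook can run, assuming standard dbt schema .yml conventions; it is illustrative, not our exact hook.

```python
import sys
from pathlib import Path

import yaml  # PyYAML


def missing_descriptions(schema_file: Path) -> list[str]:
    """Return one problem string per model or column missing a description."""
    problems = []
    content = yaml.safe_load(schema_file.read_text()) or {}
    for model in content.get("models", []):
        name = model.get("name", "<unnamed>")
        if not model.get("description"):
            problems.append(f"{schema_file}: model '{name}' has no description")
        for column in model.get("columns", []):
            if not column.get("description"):
                problems.append(
                    f"{schema_file}: column '{name}.{column.get('name')}' has no description"
                )
    return problems


if __name__ == "__main__":
    issues = [p for f in Path("models").rglob("*.yml") for p in missing_descriptions(f)]
    if issues:
        print("\n".join(issues))
        sys.exit(1)  # a non-zero exit code makes the pre-commit hook fail
```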

Tags and Business and Technical owners are also implemented through Select Star, making it seamless for data seekers to ask questions and start discussions on the tables with the most relevant people available to help.

Self-Service Analytics

The whole Self-Service Analytics initiative at ORFIUM, as well as Data Governance, will get their very own blog posts. For now, let’s focus on the tools used.

Having all ORFIUM Data Products accessible on Snowflake and discoverable through Select Star, we were in a position to launch the Self-Service Analytics project. Decentralizing data requests away from BI was necessary in order to scale, but we could not just tell our non-analysts “the data is there, knock yourselves out”.

We had to decide if we wanted Self-Service Analysts to work on Tableau or if we could find a better solution for them. It is interesting to tell the story of how we evaluated the candidate BI tools, as there were quite a few on our list. We do not claim this is the only correct way to do this, but it’s our take, and we must admit that we’re proud of it.

We decided to create a BI Tool Evaluation tool. First, we outlined the main pillars on which we would evaluate the candidate tools. We then anonymously voted on the importance of those pillars, averaging the weights and normalizing them. We ended up with a total of 9 pillars and 9 respective weights (summing to 100%). The pillar list contains connectivity effectiveness, sharing effectiveness, graphing, and exporting, among other factors.

Each pillar was then broken down into small test cases with which we would assess performance on that pillar, again assigning weights to the cases so that they summed to 100% within each pillar. Long story short, we came up with 80 points against which to assess each of the BI tools; the scoring arithmetic is sketched below.
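A toy illustration of that scoring arithmetic: average the evaluators’ scores per test case, roll the cases up into pillar scores using the intra-pillar weights, then combine pillars with the agreed pillar weights. The pillar names, weights and scores here are made up for the example.

```python
# pillar -> agreed weight (the real list has 9 pillars summing to 100%)
pillar_weights = {"Connectivity": 0.15, "Sharing": 0.12, "Graphing": 0.10}

# scores[tool][pillar] = list of (case_weight, [evaluator scores])
scores = {
    "Tool A": {
        "Connectivity": [(0.6, [4, 5, 4]), (0.4, [3, 4, 4])],
        "Sharing": [(1.0, [5, 5, 4])],
        "Graphing": [(1.0, [4, 3, 4])],
    },
}


def weighted_score(tool: str) -> float:
    total = 0.0
    for pillar, weight in pillar_weights.items():
        cases = scores[tool][pillar]
        # average the evaluators per case, then weight the cases within the pillar
        pillar_score = sum(w * (sum(evals) / len(evals)) for w, evals in cases)
        total += weight * pillar_score
    return total


print(f"Tool A: {weighted_score('Tool A'):.2f}")
```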

We needed to be as impartial as possible on this, so we assigned two people from the BI team to evaluate all 5 tools involved. Each BI tool was also evaluated by 5 other people from within ORFIUM but outside BI, all of them potential Self-Service Analysts.

Coming up with 3 evaluations for each tool, averaging the scores, and then weighting them with the agreed weights, led us to an amazing Radar Graph.

Though one tool was the clear winner in almost all pillars, it performed very poorly in the last pillar, which covered cost per user and ease of use/learning curve.

We decided to go for the blue line on the graph, which was Metabase. We found that it would serve more than 80% of the current needs of Self-Service Analysts, at very low cost and with almost no code at all. In fact, we decided (Data Governance had a say in this too) not to allow users to write SQL queries in Metabase to create graphs. We wanted people who write SQL to do so in the Snowflake UI, as those people were few and SQL-experienced, usually backend engineers.

We wanted Self-Service Analysts to use the query builder, which simulates an adequate set of SQL features, and avoid coding altogether. If they got accustomed to the query builder, they would cover 80% of their needs with no SQL, and the rest of the Self-Service Analysts (the even less tech-savvy) would be inspired to try it out too.

After ~10 months of usage (on the self-hosted open-source version, costing zero dollars per user per month, which translates to *calculator clicking* zero dollars total) we have almost 100 Monthly Active Users and over 80 Weekly Active Users, and a vibrant community of Self-Service Analysts looking to get more value from the data. The greatest piece of news is that the Self-Service Analysts are becoming more and more sophisticated in their questions, a strong sign that, over the course of 10 months, they have greatly improved their own data analysis skills and, consequently, the effectiveness of their day-to-day tasks.

Within those (on average) 80 WAUs, the majority are Product Owners, Business Analysts, Operations Analysts, etc., but there are also around five high-level executives, including a member of the BoD.

Conclusion

The BI team and ORFIUM itself have evolved over the past few years. We started with Amazon Athena and QuickSight, and after part of the journey with Python by our side, we have established Snowflake, Airflow, dbt and Tableau as the BI stack, while adding Select Star for Data Discovery and Metabase for Self-Service Analytics to ORFIUM’s stack.

More on these in upcoming posts: we have plenty more insights to share on the Self-Service initiative, the Staff BI role, and the Data Culture at ORFIUM.

We are eager to find out what the future holds for us, but at the moment we feel future-proof.

Thomas Antonakis

Senior Staff BI Analyst

LinkedIn

Foreword

This is the story of Business Intelligence and Analytics at Orfium, from before there was a single team member or even a team within the company, to now, when we have a BI organization that scales, data goals, and exciting plans for future projects.

Our story is a long one, and one we’re enthusiastic to tell. We’ll go through the timeline of our journey to the present day, but we fully plan to elaborate on the main points discussed today in their own articles.

We hope you’ll enjoy the ride. Buckle up, here we go!

Where we started

Before we formally introduced Business Intelligence to Orfium, there were a few BI-adjacent functions at the company. A number of Data Engineers, Operations Managers, and Finance execs created some initial insights with manual data pipelines.

These employees primarily gathered insights on two main parts of the business.

1. Operations Insights – Department and Employee Performance

To get base-level information on the performance of departments and specific employees, a crew of Data Engineers and Operations Managers with basic scripting skills put together a Python script, which did not lack bugs. The script pulled data from CSV exports from our internal software, as well as exports from AWS S3 provided by the DE teams, and joined them to produce a final table. All transformations were performed within the script. No automation was initially required, as our needs were mostly for monthly data. The final table would then be loaded into Excel and analyzed through pivot tables and graphs.

This solution provided some useful insights, but it certainly couldn’t scale along with the organization. A few problems came up along the way that it could not solve: our need for more frequent, daily data updates, for visibility into historical performance, and for joining this data with important data from other sources were all reasons why Orfium sought a bigger, smarter and more scalable approach to BI.

2. Clients Insights

Data Engineers put together a simple dashboard on Amazon QuickSight to give clients insights into the revenues we were generating for them. The data flowed from AWS S3 tables the engineers had created, and the dashboard displayed bar charts of revenues over time, with some basic filtering. This dashboard was maintained for a couple of years but was ultimately replaced in March of 2022 with a more comprehensive solution provided by the BI team (spoiler alert: we created a BI team).

A new BI era 

In light of some of the issues mentioned above, a small team of BI Analysts was assembled to help with the increasing needs of the Operations team.

The first decision made by the BI Analysts concerned which tools to use for visualization and the ETL process. Nikos Kastrinakis, Director of Business Intelligence, had worked with Tableau previously, so he ran a demo and trial with Tableau and ultimately convinced the team to adopt it as our visualization tool, with Tableau Prep as our ETL tool. The company was now storing all relevant data in AWS S3, and the Data Engineers used AWS Athena to create views that transformed the data provided into usable tables that BI could join in Tableau Prep.

During the Tableau trial, the BI team started working on the first dashboard, set to be used by the Operations department, replacing the aforementioned buggy script. We created one Dashboard to rule them all, with information on overall department performance, employee performance, and client performance. This gave users their first taste of the power of a BI team. Our goal was to answer many of their questions in one concise dashboard, complete with historical breakdowns of different types of data, and bring insights that users hadn’t seen before. The Tableau trial ended right before the Dashboard was set to launch. So of course, we purchased our first Tableau licenses for Orfium and onboarded the initial users with the launch of this Dashboard. 

It was a huge success! The Operations team was able to phase out its use of the script, stopped wasting time every month generating reports for itself, and was exposed to a new way of gaining insights.

Our work with the Operations team didn’t stop there. Over the following months we continued to work with this data stack. We created further automation to bring daily data to the Operations team so they could manage departments and employees in real time. But this introduced some new challenges we had to face.

With the introduction of daily estimated data, the frequency of updates and the size of the views made the extracts unusable and obsolete, so we had to face the trade-off of data freshness vs. dashboard responsiveness. Most of the stakeholders were happy to wait 30 seconds longer when they looked at their dashboards, knowing they had the most up-to-date data possible. Operations needed to be more agile in their decisions and actions, so having fresh data was very important for them. To date, members of the Operations team remain the most active users of Tableau at Orfium and have been active participants in other data initiatives across the company.

The reception of these initial dashboards was amazing. The stakeholders could derive value and make smarter decisions faster, so the BI team gained confidence and trust. However, the BI team was still mainly serving the Operations department (with some requests completed for Client and Corporate insights) while starting to receive many requests from Finance, Products, and other departments. We began adding BI Analysts to serve these needs, but this was just the beginning of building a larger team that could serve more customers more effectively, as we also began improving internal tech features and adopting external, ready-made software solutions.

Where we are today

We had many questions to clear up: 

Where do we store our data? How do we transform them? Who is responsible for these transformations? Who is responsible for the ready, delivered data points? Who has access, and how do they get it? Where do we do our analyses? How does data move between platforms and tools? How do our data customers discover our work?

Months and months of discussions between departments on all these questions led to a series of decisions and commitments about our strategic data plans.

Where we stand now is still a transition from the previous stage, as we decided to take a giant step forward by embracing the Data Mesh initiative. We’ll have the chance to talk about some of the terms and combinations of software that we’re about to mention in future blog posts, but we can run through the basics right now.

Our company is growing very quickly and, given the fact that we prefer being Data-(insert cliche buzzword here), the needs and requests for BI and Data analysis are growing at double the speed.

The increase in the number of BI Analysts was inevitable, given the growing number of requests and the addition of new departments in the company that needed answers to their data questions.

By hiring more BI Analysts, we split our workforce between our two main Data customers, and thus created two BI Squads.

One is focused on finance and external client requests. We named it the Corporate squad, and it consists of a BI Manager and 2 BI Analysts. This is the team that prepares the monthly Board meeting presentation materials (P&L, Balance Sheets), and the dashboards shared with our external customers so that we can use data to demonstrate the impact of our work on their revenue and so that they get a better understanding of their performance on YouTube. This squad also undertakes many urgent ad-hoc requests on a monthly basis. This squad has a zero tolerance policy for mistakes and usually works on a monthly revision/request cycle.

The second squad is more focused on analyzing and evaluating the performance and usage of our internal products, and on connecting that information with the performance of our Operations teams, which generate the largest portion of our revenue. This squad, which happens to work across two different time zones, also consists of a BI Manager and 2 BI Analysts, and has more frequent deadlines, as new features come up very fast and need evaluation. The nature of the data and the continuous evolution of the data model result in less robust data.

In the meantime, we had already realized that we had to bulletproof our infrastructure and technical skills before scale caught up with us. We decided to have some team members focus on delivering value by creating useful analyses, as described above, but we also reserved time and people more focused on paving the way for the rest of the analysts to create more value, more efficiently.

We researched the community’s thoughts on this and we found the term Analytics Engineer, which seemed very close to what we were looking for. We thought this would be very important for the team and decided to go one step further and create a separate role that would be the equivalent of the Staff Engineer for software engineering teams. This role is more focused on setting the technical direction of the department, researching new technologies, consulting on the way projects should be driven, and enforcing best practices within the Analytics Chapter of the Data Unit. Quality, performance, and repeatability are the three core values that the code produced by this department should have.

We currently have a team of 8 people including the BI Director, two squads, each with one BI Manager and two BI Analysts, plus the Staff BI Analyst.

In terms of skillsets, we left Python behind. Instead, we’re focusing on writing reliable, performant SQL and collaborating efficiently on git as a team. Our new toolkit was also more or less co-decided by our endorsement of a centralized data mesh, which is currently hosted on Snowflake. Nothing is hosted or processed locally anymore: we develop data pipelines in SQL, apply them to our dev/staging/production environments through dbt, and orchestrate scheduling and data freshness with Airflow.

We are the owners of our own Data Product, Orfium’s BI layer. It is a schema in Orfium’s production database where we store the fresh, quality, documented data resulting from our processes. This set of tables connects data from other teams’ data products (internal products, external reports, data science results) and creates interoperating tables. These tables are the base for all our Tableau dashboards and help other teams use curated data without having to reinvent the Orfium data model on their own. Our Data Product and our Tableau sites, with all of our dashboards, are fully documented and enriched with metadata so that our data discovery tool, Select Star, allows stakeholders to search and find all aspects of our work.

The future

Data Mesh was a big bet for Orfium, and we will continue to build on it. The principles are in place, and we are in the process of onboarding all departments onto the initiative so we can take full advantage of the outcomes. When this hard process is complete, all teams will enjoy centralized data and the interoperability that derives from it, along with Domain-Driven Data ownership that ensures the agreed levels of data quality, helping Orfium become more data-powered.

In addition to the obvious outcomes of applying the Data Mesh principles in a company, we believe that we need to follow up with two more major bets.

We decided to initiate, propose and promote Data Culture at Orfium. This is a very big, very deep project: all employees will need to get out of their comfort zones to achieve it. We need to change the way we work and start planting data seeds very early in all our projects, products, initiatives, and working behavior so that we can enjoy the results later on. This initiative will come with a Manifesto, which is actively being written and will soon be published. It will require commitment and follow-up on the principles proposed so that we achieve our vision.

Self-Service Analytics is also one of the principles that Data Mesh is based on, and we decided to move forward emphatically with this too. Data will be generally accessible on Snowflake by everyone, but data analysis requires data literacy, SQL chops, and infrastructure that can host large amounts of data in an analysis. We decided to use Metabase as the proxy and facilitator for Self-Service Analytics. It provides the infrastructure by analyzing the data server-side rather than locally, and its query builder for creating questions is an excellent tool for no-code analyses. Certainly, it is not as customizable as SQL, but it covers 85% of business users’ needs with superior usability for non-technical users.

This leaves us with data literacy and consultancy. For this, we have set up a library of best practices, examples, tutorials, and courses explaining how to handle business questions, analyses, limitations of tools, etc. At Orfium we always want to take a step further though, and we have been working to formulate a new role that will provide in-depth consultancy on data issues. This role will act as a general family doctor, who you know personally and trust, and will handle all incoming requests on data problems. Even if the data doctor cannot directly help you, they can direct you to a more suitable “doctor”, a set of more specialized experts, each one in their data sector (Infrastructure, SQL, Data Visualization, Analysis Requirement, you name it).

To infinite data and beyond

What a journey this has been over the last 3-4 years for BI in Orfium! We have gone through a lot, from not having official Business Intelligence to a BI team that has plans, adds value for the organization and inspires all teams to embrace the data-driven lifestyle. We’ve done a great job so far, and we have great plans for the future too.

It’s a long way to the top, if you want to rock and roll, ACDC :zap:

Stephen Dorn

Senior Business Intelligence Analyst

LinkedIn

Thomas Antonakis

Senior Staff BI Analyst

LinkedIn