Soundmouse by Orfium has been chosen by the South Korean Broadcast Music Identifying System (BROMIS) as its official music reporting partner. The consortium, led by major broadcasters such as KBS, MBC, and SBS, along with four collecting societies (KOMCA, KOSCAP, FKMP, and KEPA), represents a significant stride forward in transparency and accuracy within South Korea’s music market. As part of the deal, Soundmouse’s industry-leading music cue sheet reporting and audio recognition fingerprinting technology will now be utilized by 36 broadcasters across 175 TV channels and radio stations in South Korea.

Supported by the South Korean Ministry of Culture, Sports and Tourism and the Korean Copyright Commission, the three-year agreement promises to enhance transparency in music reporting processes and ensure fairer royalty payments to creators and rights holders. 

This partnership marks a significant milestone for us at Orfium, as Soundmouse by Orfium is one of the first third-party companies granted access to the Korean Music Database, allowing us to deliver seamless matching of reported music against their vast repository of 17.3 million Korean music tracks.

Steve Choi, Secretary General of BROMIS, said the partnership is a game-changer for the music industry in South Korea, stating that “The sharing of clear, transparent, and granular data from a neutral source represents a significant step towards making the music industry a more equitable environment. With the consistent accuracy and reliability of their reporting, Soundmouse by Orfium stood out as the best partner for the project. The quality and accuracy of their reporting processes will be game changing for our industry and will have a positive impact on the remuneration of creators and rights holders as well as the development of our wider industry ecosystem.”

The impact of this collaboration extends beyond mere technological advancements though. South Korean collecting societies will now be able to leverage Soundmouse by Orfium’s reports to inform royalty distributions to their members, including songwriters, recording artists, phonogram producers, and rights holders. By seamlessly meeting cue sheet requirements and streamlining reporting processes, broadcasters will be able to engage in more efficient negotiations with collecting societies, fostering a healthier music ecosystem.

Rob Wells, CEO of Orfium, said, “To be selected for such a significant project underscores the quality of our technology, team, and our commitment to the music industry. We are thrilled to expand our presence in the Asian market, supporting local creators and rights holders while enhancing the transparency and accuracy of music reporting processes.”

Bonna Choi, Head of Soundmouse by Orfium Korea, mirrored this sentiment: “After such a rigorous consultation and trial process, we are excited to have been selected and accredited by BROMIS to work on behalf of creators, rights holders, broadcasters and collecting societies. We will bring extensive industry experience from an expert team, the highest standards in cue sheet reporting, and the most advanced technology in audio recognition fingerprinting to strengthen the process of music reporting in such an important market.”

This exciting new deal, officially approved in January 2024, marks another significant step in Orfium’s expansion in Asia. Following successful partnerships with entities like Avex, Bandai Namco, and the Japanese Society for Rights of Authors, Composers and Publishers (JASRAC), Orfium continues to lead the charge in revolutionizing music reporting and rights management across the continent.

Be a part of the journey with Orfium in Asia

If you’re interested in our journey and want to explore opportunities with Orfium, get in touch here.

🧾Introduction

One of the latest initiatives undertaken by the Rights Management team was the migration of a legacy Django application to a more maintainable and scalable platform. The legacy application was reaching its limits, and there was a real risk of missing revenue because the system could not handle any more clients.

To address this challenge, we developed a new platform that could follow the “deep pocket strategy.”

What does that mean for our team? Essentially, when the number of transactions we process spikes, we deploy as many resources as needed to keep our SLAs.

With the end goal of building a platform capable of scaling, we decided to go with a cloud-native architecture and AsyncIO technologies. The first stress tests surprised (and baffled!) us, because the system didn’t perform as expected even for a relatively small number of transactions per hour.

This article covers our journey towards understanding the nuances of our changing architecture and the valuable lessons learned throughout the process. In the end (spoiler alert!) the solution was surprisingly simple, even though it took us on a profound learning curve. We hope our story will provide valuable insights for newcomers venturing into the AsyncIO world!

📖Background

Orfium’s Catalog Synchronization service uses our in-house application called Rights Cloud, which is tasked with managing music catalogs. Additionally, there’s another system, called Rights Manager, which handles synchronizing many individual catalogs with user-generated content (UGC) platforms such as YouTube.

Every day, the Rights Manager system ingests Rights Cloud messages, which detail changes to a specific client’s composition metadata and ownership shares. For this, we implemented the Fan-Out Pattern: Rights Cloud sends the message to a dedicated topic (SNS) and Rights Manager copies the message to its own queue (SQS). For every message that Rights Manager ingests successfully, two things happen:

  1. The Rights Cloud composition is updated in the Rights Manager system. The responsible API is the Import Rights Cloud Composition API.
  2. After the successful Rights Cloud ingestion, a new action is created (delivery, update, relinquish). The API responsible for handling these actions is the Identify Actions API.

Based on this ingestion, the Rights Manager system then creates synchronization actions, such as new deliveries, ownership share updates, or relinquishments.
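To make the Fan-Out Pattern concrete, here is a minimal sketch of the SNS-to-SQS wiring using boto3. The topic and queue names, the message payload, and the comments referring to the two APIs are placeholders for illustration; they are not our production identifiers, and the queue access policy and error handling that a real setup needs are omitted.

```python
import json
import boto3

# Hypothetical names for illustration only; the real topic/queue names differ.
TOPIC_NAME = "rights-cloud-composition-events"
QUEUE_NAME = "rights-manager-ingestion"

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# The SNS topic that Rights Cloud publishes composition changes to.
topic_arn = sns.create_topic(Name=TOPIC_NAME)["TopicArn"]

# The SQS queue owned by Rights Manager, subscribed to the topic so every
# published message is copied ("fanned out") into it.
# (A real setup also needs a queue access policy allowing SNS to deliver.)
queue_url = sqs.create_queue(QueueName=QUEUE_NAME)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Rights Cloud side: publish a composition-change event.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"composition_id": "abc-123", "change": "ownership_shares"}),
)

# Rights Manager side: poll the queue and hand each message to the two APIs.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10
).get("Messages", [])
for msg in messages:
    envelope = json.loads(msg["Body"])       # SNS envelope
    event = json.loads(envelope["Message"])  # original Rights Cloud payload
    # ... call the Import Rights Cloud Composition API, then the Identify Actions API ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```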

Simplified Cloud Diagram of Rights Manager

😵The Problem

During our summer sprint, we rolled out a major release for our Rights Manager platform. After the release, we noticed some delays for the aforementioned APIs.

We noticed (see the diagrams below) latency spikes of between 10 and 30 seconds for these APIs during the last two weeks of August. Even worse, some requests were dropped because of the 30-second web server timeout. Overall, the P90 latency was over 2 seconds. This was alarming, especially considering that the team had designed the system to meet a maximum latency target of under 1 second.

We immediately started investigating this behavior because scalability is one of the critical aspects of the new platform.

First, we examined the overall requests for the second half of August.

At first glance, two metrics caught our attention:

  1. Alarming metric #1: The system could not handle more than ~78K requests in two weeks, although it was designed to serve up to 1 million requests per day.
  2. Alarming metric #2: The actions/identify API was called 40K more times than the import/rights_cloud API. From a business perspective, actions/identify should have been called at most as many times as import/rights_cloud. This indicated that actions/identify was failing, leading Rights Manager to constantly retry the requests.

Total number of requests between 16-31 of August.

Based on the diagrams, the system was so slow to respond that it felt as if it were under siege!

The following sections describe how we solved the performance issue.

Spoiler alert: not a single line of code changed!

📐Understand the bottlenecks – Dive into metrics

We started digging into the Datadog log traces. We noticed that there were two patterns associated with the slow requests.

Dissecting Request Pattern #1

First, we examined the slowest request (32.1 seconds). We discovered that sending a message to an SQS queue took 31.7 seconds, a duration far too long for our liking. For context, SQS is managed by AWS, and it was hard to believe that a seemingly straightforward service would need 30 seconds to reply under any load.

Examining slow request #1: Sending a message to an AWS SQS took 31.7 seconds

Dissecting Request Pattern #2

We examined another slow request (15.9 seconds) and the results were completely different. This time, we discovered a slow response from the database. The API call needed ~3 seconds to connect to the database, and a SELECT statement on the Compositions table needed ~4 seconds. This troubled us because the SELECT query uses indexes and cannot be further optimized. Additionally, 3 seconds to obtain a database connection is a lot.

Examining a slow request #2: The database connect took 2.98 seconds and an optimized SQL-select statement took 7.13 seconds.

Examining a slow request #2: Dissecting the request we found that the INSERT statement was way more efficient than the SELECT statement. Also, the database connect and select took around 64% of the total request time.

Dive further into the database metrics

Based on the previous results, we started digging further into the infrastructure components.

Unfortunately, the Amazon metrics for SQS didn’t provide the insight needed to understand why it was taking 30 seconds to publish a message to the queue.

So, we shifted our focus to the database metrics. Below is the diagram from AWS Database monitor.

The diagram showed us that no load or latency existed on the database. The maximum load of the database was around 30%, which is relatively small. So the database connection should have been instant.

Our next move was to see if there were any slow queries. The below diagram shows the most “expensive” queries.

Once again, the result surprised us: no database load or slow query was detected by AWS Performance Insights.

The most resource-intensive query was the autovacuum, which accounted for 0.18 of the total load, which is perfectly normal. The maximum average latency was 254 ms, for an INSERT statement, once again reflecting perfectly normal behavior.

The most expensive queries that are contributing to DB load

According to the AWS documentation, by default the Top SQL tab shows the 25 queries that contribute the most to database load. To help tune queries, developers can analyze information such as the query text and SQL statistics.

So at that moment, we realized that the database metrics from Datadog and AWS Performance Insights were different.

The metric that solved the mystery

We suspected that something was amiss with the metrics, so we dug deeper into the system’s status when the delays cropped up. Eventually, we pinpointed a pattern: the delays consistently occurred at the start of a request batch. But here’s the twist – as time went on, the system seemed to bounce back and the delays started to taper off.

The below diagram shows that when the Rights Cloud started to send a new batch of messages around 10:07 am, the Rights Manager APIs needed more than 10 seconds to process the message in some cases.

After a while, at around 10:10 am, there was a drop in the P90 from 10 seconds to 5 seconds. Then, by 10:15 am, the P90 plummeted further to just 1 second.

Something peculiar was afoot – instead of the usual expectation of system performance degrading over time due to heavy loads, our system was actually recovering for the same message load. 

At this point, we decided to take a snapshot of the system load. And there it was – we finally made the connection!

Eureka! The delays vanish when the number of ECS (FastAPI) instances increases.

We noticed that there was a direct connection between the number of API requests and the number of ECS instances. Once the one and only ECS instance could not serve the requests, auto scaling kicked in and new ECS instances were spawned. Every ECS instance needs around 3 minutes to go live. When the new instances were live, the delays decreased dramatically.

We backed up our conclusion by creating a new Datadog heatmap. The diagram below shows the aggregated duration of each Import Rights Cloud Composition request. It is clear that when 2 new FastAPI instances were spawned at 10:10 am, the delays decreased from 10 seconds to 3 seconds. At 10:15 am there were 5 FastAPI instances and the responses dropped to 2 seconds. Around 10:30 am the system had spawned 10 instances and all response durations were around 500 ms.

At the same time, the database load was stable and between 20-30%.

That was the lightbulb moment when we realized that the delays weren’t actually database or SQS related. It was the async architecture that caused the significant delays! When the ECS instance was operating at 100% CPU and a function was getting data from the database, the SELECT query itself completed in milliseconds, but the function was (a)waiting for the async scheduler to resume it. In reality, the function had nothing left to do but return the results.

This explains why some functions took so much time and the metrics didn’t make any sense. Despite the SQS responding in milliseconds, the functions were taking 30 seconds because there simply wasn’t enough CPU capacity to resume their execution.

Spawning new instances completely resolved the problem because there was always enough CPU for the async operations.
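The effect is easy to reproduce outside our platform. In the standalone sketch below (a toy example, not our production code), a coroutine performs a fast piece of simulated I/O, but because CPU-bound work keeps the same event loop busy, the measured wall-clock time of the await is far larger than the I/O itself, which is exactly the pattern we saw in our traces.

```python
import asyncio
import time

async def fast_io() -> float:
    """Simulates a millisecond-level database/SQS call and measures its wall time."""
    start = time.perf_counter()
    await asyncio.sleep(0.005)  # the "real" I/O takes ~5 ms
    return time.perf_counter() - start

async def cpu_hog(seconds: float) -> None:
    """A badly behaved coroutine: pure CPU work with no await, so nothing else can run."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        sum(range(10_000))

async def main() -> None:
    io_task = asyncio.create_task(fast_io())
    hog_task = asyncio.create_task(cpu_hog(2.0))
    elapsed = await io_task
    await hog_task
    # The await took ~2 s of wall-clock time even though the I/O was ~5 ms:
    # the event loop could not resume fast_io() until cpu_hog() gave up the CPU.
    print(f"fast_io wall-clock time: {elapsed:.3f}s")

asyncio.run(main())
```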

🪠Root cause

Async web servers in Python excel in performance thanks to their non-blocking request handling. This unique approach enables them to seamlessly manage incoming requests, accommodating as many as the host’s resources allow. However, unlike their synchronous counterparts, async servers lack the capability to reject new requests. Consequently, in scenarios of sustained incoming connections, the system may deplete all available resources. Although it’s possible to set a specific limit on maximum requests, determining the most efficient threshold often requires trial and error.
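As a hedged illustration of what such a limit could look like (this is not what we ultimately deployed), a simple FastAPI middleware can cap in-flight requests with an asyncio.Semaphore and turn the overflow away with a 503. The threshold value below is an arbitrary placeholder; finding a sensible one is exactly the trial-and-error part.

```python
import asyncio
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

# Hypothetical threshold; the "right" value depends on the host's resources.
MAX_IN_FLIGHT_REQUESTS = 100
_semaphore = asyncio.Semaphore(MAX_IN_FLIGHT_REQUESTS)

@app.middleware("http")
async def limit_concurrency(request: Request, call_next):
    # Reject immediately instead of queueing when the server is saturated.
    if _semaphore.locked():
        return JSONResponse({"detail": "Server busy, retry later"}, status_code=503)
    async with _semaphore:
        return await call_next(request)

@app.get("/health")
async def health():
    return {"status": "ok"}
```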

To broaden our understanding, we delved into the async literature and came across Cooperative Multitasking in CircuitPython with asyncio.

Cooperative multitasking is a programming style in which multiple tasks take turns running. Each task continues until it either encounters a waiting condition or decides it has run long enough, allowing another task to take its turn.

In cooperative multitasking, each task has to decide when to let other tasks take over, which is why it’s called “cooperative.” So, if a task isn’t managed well, it could hog all the resources. This is different from preemptive multitasking, where tasks get interrupted without asking to let other tasks run. Threads and processes are examples of preemptive multitasking.

Cooperative multitasking doesn’t mean that two tasks run at the same time in parallel. Instead, they run concurrently, meaning their executions are interleaved. This means that more than one task can be active at any given time.

In cooperative multitasking, tasks are managed by a scheduler. Only one task runs at a time. When a task decides to wait and gives up control, the scheduler picks another task that’s ready to go. It’s fair in its selection, giving every ready task a shot at running. The scheduler basically runs an event loop, repeating this process again and again for all the tasks assigned to it.
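The behaviour described above can be seen in a few lines of asyncio (again a toy sketch, not part of our platform): each task runs until it awaits, at which point the event loop picks the next ready task, so the two tasks' output interleaves.

```python
import asyncio

async def task(name: str, steps: int) -> None:
    for i in range(steps):
        print(f"{name}: step {i}")
        # Awaiting hands control back to the event loop ("cooperating"),
        # which then schedules whichever task is ready to run next.
        await asyncio.sleep(0)

async def main() -> None:
    # Both tasks run concurrently on one event loop; their prints interleave
    # (A: 0, B: 0, A: 1, B: 1, ...), but only one task executes at any instant.
    await asyncio.gather(task("A", 3), task("B", 3))

asyncio.run(main())
```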

🪛Solution Approach #1

Now that we’ve identified the root cause, we’ve taken the necessary steps to address it effectively.

  1. Implement a more aggressive scale-out. Originally set at 50% CPU, the scale-out threshold is now 30%, giving us a more responsive approach to scaling. Since Amazon needs around 3 minutes to spawn a new instance and the CPU previously hit 100% within a minute of crossing the threshold, the system was left straining for roughly 2 minutes while new capacity came up.
  2. Implement a more defensive scale-in. The rule for terminating ECS instances is CPU-based: with the scale-in threshold set at 40% over a 2-minute interval, instances running below 20% for the same duration are terminated by the auto-scaler.
  3. Change the load balancer algorithm. Initially using a round-robin strategy, we saw CPU usage vary significantly between instances, with some reaching 100% while others stayed at 60%. To address this, we transitioned to the “least_outstanding_requests” algorithm, which directs each new request to the instance with the fewest in-flight requests, evening out CPU usage and resource utilization across the system (a configuration sketch follows this list).
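Below is a rough boto3 sketch of how these knobs could be wired up. The cluster, service, and target group identifiers are placeholders, and our actual infrastructure is defined elsewhere, so treat this as an illustration of the settings rather than our exact configuration; the floor of two instances and cap of 50 anticipate the values described in Approach #2 below.

```python
import boto3

# Placeholder identifiers; the real cluster/service/target group differ.
RESOURCE_ID = "service/rights-manager-cluster/rights-manager-api"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/rights-manager/123"

autoscaling = boto3.client("application-autoscaling")
elbv2 = boto3.client("elbv2")

# Keep a floor of instances and leave headroom to scale out.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Target-tracking policy: aim for ~30% average CPU so scale-out starts early.
autoscaling.put_scaling_policy(
    PolicyName="rights-manager-cpu-30",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 30.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,    # react quickly to load spikes
        "ScaleInCooldown": 120,    # scale in more defensively
    },
)

# Route each request to the target with the fewest in-flight requests.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"}
    ],
)
```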

By aggressively scaling out additional ECS instances, we’ve successfully maintained a P99 latency of under 1 second, except for the first minute of an API request flood.

🪛🪛Solution Approach #2

While the initial approach yielded results, we recognized opportunities for improvement. We proceeded with the following strategy:

  1. Maintain two instances at all times, allowing the system to accommodate sudden surges in API calls.
  2. Cap the maximum number of ECS instances at 50.
  3. Implement fine-grained logging in Datadog to differentiate between database call durations and scheduler resumption times (a sketch of the idea follows this list).
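The idea behind item 3 is sketched below under assumed names (the real code uses our ORM and Datadog tracing): log both the total await time of a database call and a separate event-loop-lag probe. When the lag is large, the await time is dominated by scheduling rather than by a slow database.

```python
import asyncio
import logging
import time

logger = logging.getLogger("rights_manager.metrics")

async def event_loop_lag() -> float:
    """Measure how long a zero-second sleep takes to be resumed.

    On an idle loop this is near zero; on a CPU-starved loop it grows,
    which is the signature of the problem described in this article.
    """
    start = time.perf_counter()
    await asyncio.sleep(0)
    return time.perf_counter() - start

async def timed_query(run_query):
    """Wrap an awaitable DB call (placeholder) and log both timings."""
    start = time.perf_counter()
    result = await run_query()        # total await time: DB work + scheduling delay
    total = time.perf_counter() - start

    lag = await event_loop_lag()      # proxy for scheduler resumption delay
    logger.info("query_total=%.3fs event_loop_lag=%.3fs", total, lag)
    return result
```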

🏫Lessons learned

  1. Understanding the implications of async architecture is crucial when monitoring system performance. During times of heavy load, processes can pause without consuming resources—a key benefit to note.
  2. In contrast to a non-preemptive scheduler, which switches tasks only when necessary conditions are met, preemptive scheduling allows for task interruption during execution to prioritize other tasks. Typically, processes and threads are managed by the operating system using a preemptive scheduler. Leveraging preemptive scheduling offers a promising solution to address our current challenges effectively.
  3. We realized that while the vCPU instances are cheap, the processing power is relatively low.
  4. The ECS processing power may differ between instances, while the EC2 instances provide a stable processing power.
  5. A FastAPI server hosted on 1 vCPU instance can handle 10K requests before maxing out CPU consumption.
  6. There is little difference in cloud cost between completing a task in 10 hours with 1 vCPU and in 1 hour with 10 vCPUs. From a business perspective, however, the substantial difference lies in completing the job 90% faster.
  7. A r6g.large instance typically supports approximately 1000 database connections.

🧾Conclusion

Prior familiarity with async architecture would have streamlined our approach from the start but this investigation and the results proved to be more than rewarding!

AsyncIO web servers demonstrate exceptional performance, efficiently handling concurrent requests. However, when the system is under heavy load, async workers can deplete CPU and memory resources in the blink of an eye, a common challenge readily addressed by serverless architecture. With processing costs as low as €30/month for a 1 vCPU / 2 GB memory Fargate task, implementing a proactive scale-out strategy perfectly aligns with business objectives.

Orfium has received confirmation that all of our global operating entities have achieved ISO 27001:2022 standard certification for Information Security Management Systems (ISMS).

The accreditation is an important marker for Orfium and our client base, providing assurance of the robustness of Orfium’s information security policies.

Introduced in 2005 by the International Organization for Standardization and the International Electrotechnical Commission, ISO/IEC 27001 stands as an international benchmark for effective information security management.

This standard offers comprehensive guidance for the establishment, implementation, maintenance and continual improvement of an Information Security Management System (ISMS), outlining the essential requirements that an ISMS must meet.

The adoption of the ISO/IEC 27001 standard certification brings Orfium several benefits, including:

  1. Risk Management: Identification of information security risks to mitigate vulnerability to cyber-attacks. Preparation of people, processes, and technology across the organization to address potential risks.
  2. Enhanced Security Measures: Promotion of robust security controls and measures.
  3. Compliance and Legal Alignment: Support in meeting regulatory requirements related to information security, critical for sensitive data such as financial statements, intellectual property information, or employee data.
  4. Business Continuity Enhancement: Establishment of protocols for incident management.
  5. Continual Improvement: Regular reviews and enhancements to the ISMS, fostering a culture of ongoing improvement in security practices.

Having ISO 27001 compliance is an important milestone for Orfium. It assures all our clients that we have robust information security policies in place for all of Orfium’s global operating companies.

Our commitment to customers worldwide is to guarantee that the data they share with us is safe and that we will continually evolve our processes to ensure ongoing compliance with international security practices.

Michael Petychakis, CTO at Orfium

About the ISO/IEC 27001 standard

ISO/IEC 27001 is the world’s best-known standard for information security management systems (ISMS).

The ISO/IEC 27001 standard provides companies of any size and from all sectors of activity with guidance for establishing, implementing, maintaining and continually improving an information security management system.

Conformity with ISO/IEC 27001 means that an organization or business has put in place a system to manage risks related to the security of data owned or handled by the company, and that this system respects all the best practices and principles enshrined in this International Standard.

For more information visit: https://www.iso.org/standard/27001 

In this blog post, we’ll share expert tips on how to grow your catalog revenue on TikTok, YouTube, and Instagram/Facebook. Whether you’re a seasoned rights owner or just starting out, we’ll guide you on how to build an effective user-generated content (UGC) strategy.

If you have questions after reading this article or are looking to get more revenue out of your music on TikTok, YouTube or Meta (Instagram and Facebook), we invite you to get in touch with our team to learn how you could be generating more revenue across UGC platforms.

1. Upload your music to UGC platforms

To tap into the exponential growth of UGC and its growing revenue potential, you as a music rights owner first need to upload your music to social media platforms for users to engage with.

There are three ways you can make your music accessible to creators on UGC platforms.

  1. Make your music available through a distributor
  2. Gain access to each platform’s Content Management System
  3. Manage your music through a Rights Management Company

1. Make your music available through a distributor

Whether you’re an individual artist or a more established label, distributors can help place your music on popular platforms like Spotify, Apple Music, and sometimes UGC platforms too. This mass market option is a viable alternative for those who either currently aren’t in a position to sign with Rights Management Companies like Orfium, or aren’t affiliated with a label.

Not all distributors have partnerships with UGC platforms though, and if yours does, you may have to ask whether they can handle the uploading process for you. Distributors who do UGC distribution often collect revenue for rights owners, so you’ll rely on them to take care of the tasks listed in the next section.

2. Gain access to each social platform’s Content Management System (CMS)

What is a CMS?

Content Management Systems (CMS) are essentially a personal dashboard to the backend of social media platforms. They offer a basic level of usage monitoring, revenue tracking and a place to manage potential copyright infringements. Each platform has its own CMS for rights owners: TikTok has MediaMatch, YouTube has ContentID and Meta has RightsManager. These systems enable rights owners to upload metadata and tracks to their platforms – from sound recordings to compositions.

Each CMS will require rights owners to register and wait for approval from the platform. This can be a tricky process. For all of them, the size of your catalog is a factor. Currently, only the largest rights owners with the most extensive catalogs of music have CMS access. For rights owners who don’t have CMS access, you can go through a third party that already has CMS access – namely a distributor or Rights Management Company.

3. Manage your music through a Rights Management Company

So how does going through a Rights Management Company work? You’ll first need to get your catalog into a music library, such as Extreme Music or 5 Alarm. Then, the music library needs to bring on a Rights Management Company and agree to have them administer the library’s assets on UGC platforms for revenue claiming.

In this scenario, Rights Management Companies like Orfium play a crucial role. They collaborate directly with labels, publishers, production music companies and established rights holders to develop customized services and strategies tailored specifically for UGC platforms. These strategies can include metadata cleanups and handling, asset uploading, revenue collection, usage finding and tracking and everything else in the following sections.

2. Optimize your metadata to boost usage and revenue from UGC platforms

The discoverability and monetization of your music on UGC platforms are directly linked. Accurate and comprehensive metadata is key to boosting discoverability, usage, and revenue. It’s crucial to prioritize organizing and perfecting your metadata before uploading your music. This proactive approach increases the chances of your music being found by users and the use of your music being properly attributed, leading to more monetization opportunities.

What is music metadata?

Music metadata can be considered the digital DNA of a song file. It usually lives as a spreadsheet of data that includes vital information like artist name, song title, album, copyright information, writer names, ISRCs, and territory ownership amongst other details. All of this metadata is the code that allows UGC platforms to understand and categorize music and ownership effectively.

Best practices for handling metadata

To optimize your music’s presence on UGC platforms, you’ll likely need to carry out a comprehensive ‘clean up’ of your metadata. Here are some best practices to consider when sorting and perfecting your metadata:

  1. Pay Attention to Details: Thoroughly review and double-check your metadata for accuracy and completeness. Uniform spacing and capitalization are two things to watch out for which often trip up the upload process.
  2. Provide High-Quality Reference Files: When uploading sound recordings, ensure you have high-quality MP3 or WAV files. These reference files are used by automatic detection systems employed by UGC platforms to identify the usage of your music in videos. The higher the quality, the better the detection.
  3. Unique Reference Files: While detection systems like Content ID (YouTube’s CMS) are effective, they are not perfect. To avoid unnecessary claim and reference overlaps, make sure your reference files are as unique as possible. Be mindful of arrangements and the use of samples when uploading your music.
  4. Research Platform Requirements: Different UGC platforms have varying requirements for metadata and reference files. Research and understand these requirements for each platform to streamline the ingestion process.
  5. Choose a Reputable Rights Management Company: If you opt for a rights management company to handle UGC for you, choose a reputable and trustworthy partner – if it’s not done right, you’ll risk missing out on potential revenue without even knowing it’s lost. Choose a company that is trusted, has proven results, and is happy to put you in touch with existing clients who can provide recommendations.

3. Actively manage your music on UGC platforms

When your music is on one or multiple UGC platforms, keep a close eye on the associated admin tasks to make sure they’re delivering revenue to their full potential. The frequency of these tasks could range from daily to monthly depending on the size and popularity of your catalog. Managing these tasks promptly and thoroughly is crucial to making sure you’re tracking and monetizing accurately, and can be done within the CMS of each platform. 

These tasks can all be handled by your own in-house person or team, by your distributor or by a Rights Management Company.

How and where do I manage my music on TikTok, YouTube or Meta?

Within the CMS live your assets, which will need to be uploaded and managed.

What this means for rights holders is that you can have control over how your assets are used. 

For example, a route-for-review monetization policy can come in handy for production music companies licensing music for use on UGC platforms, giving them the ability to approve or deny an automatic monetization claim before it is placed.

To make sure your catalog is being managed in a timely way, you should allocate a team or outsource the responsibility to a Rights Management company. For the majority of rights owners who will not be granted CMS access, this is the best route toward effective and accurate monetization.

How to manage conflicts, disputes & reference overlaps for music on TikTok, YouTube or Meta

The nature of UGC, with vast volumes of content created every minute of every day, means that conflicts, disputes and reference overlaps happen very often. But life would not be life without hiccups in the revenue-claiming process. As annoying as these issues can be, each CMS has avenues to solve them.

Conflicts: This is when two or more parties claim ownership of the same asset. While assets are in conflict, any money generated from claims is held in escrow and is only paid out, as backpay, to the party holding ownership once the conflict is resolved.

Ownership conflicts occur when two or more parties place more than 100% ownership on any given asset. When an asset has over 100% ownership between different parties, the conflict will cause all money generated by copyright claims to go into escrow. YouTube will hold this money until the ownership conflict is resolved meaning the percentage of ownership goes back to 100% between all parties. YouTube’s Content Management System allows users to contact third parties to resolve any conflict issues. It is important to stay on top of conflicts as this can be a blocker for paying out the proper earnings on each asset. 

Disputes: When a copyright claim is placed on a video, the uploader of the video has the right to dispute the claim. Once a dispute is placed, the copyright claim is set to inactive. The CMS gives you the ability to accept or reject the dispute by reinstating the claim or releasing the claim.

Reference Overlaps: If two parties ingest the same sound recording this can result in a reference overlap. This can be a portion of the reference or the entire reference. If a reference overlap occurs, you can action the issue by asserting ownership of the sections that overlap or you can exclude the portions that overlap from your reference. 

Copyright claims: When an uploader publishes video content containing copyright material without permission or licensing, they may receive a copyright claim that allows copyright owners to take actions on the material they own. 

The copyright claim will use the asset and the associated policy that has been chosen by the rights holder to track, monetize, block, etc. 

This is not to be confused with a removal request or a strike. While rights holders have the right to submit a removal request or a strike, a copyright claim does not necessarily mean the video will receive a removal request or a strike. 

If you’re going through a distributor or Rights Management Company, be sure to ask them how frequently these issues will be monitored and tackled by the team handling your catalog. Time is of the essence in all aspects of revenue claiming on UGC. Not all platforms recognize revenues prior to approved claims – so you’ll want to be sure that all possible bumps in the claiming process are being actively handled.

What is manual claiming?

ContentID, MediaMatch, and RightsManager are automated systems that scan user-generated content (UGC) for music usage and generate claims for rights holders. However, these systems are not perfect and struggle to identify deviations from the original source material. As a result, covers, remixes, and fan-recorded live performances (or UGC) often go undetected due to differences in instrumentals or lyrics compared to the original files uploaded by rights owners. This is where manual claiming is crucial.

Manual claiming presents a significant opportunity for Production Music companies, as licensed content is being used and shared on the platform without proper acknowledgment or monetization.

Failure to consistently identify usages and provide accurate timestamps will result in missed revenue opportunities. This is especially critical on TikTok, where revenue is only delivered after a claim has been placed. Historical usage of your tracks before placing a claim will not be considered.

If your Distributor or Rights Management Company is handling manual claims on your behalf, they are likely to employ a fourth-party service with technology or manpower to search for usages not caught by automatic usage trackers. 

Talk to an expert

Interested in working with a third party to manage your music rights across UGC platforms? Orfium works with top Production Music Companies across the world to manage and boost their catalog revenue on UGC platforms. To learn more, get in touch with our team of experts.

About Wise Music 

Wise Music Group is an international conglomerate of wholly-owned companies and is home to some of the world’s leading independent classical music publishing houses, as well as a number of pop music catalogs. In 2023, they celebrated their golden jubilee; over these 50 years, they have built and acquired over sixty international publishing houses and more than thirty notable imprints, many of which are iconic household names such as Thelonious Monk, David Bowie and The Zombies.

With their extensive repertoire of both commercial and classical music, including a wealth of original samples, Wise Music had previously faced challenges in fully capitalizing on its vast catalog of works due to catalog conflicts and untracked or unclaimed usage. 

In January 2021, Wise Music partnered with Orfium, making a crucial step towards more effectively managing their works on YouTube, streamlining their catalogs and boosting revenues for their artists and songwriters. 

Orfium’s Solution 

Orfium undertook the full management of Wise Music’s YouTube account in the United States to ensure they are optimizing it and maximizing their revenue potential. The services provided by Orfium include:

Results 

Wise Music achieved an incredible feat by more than doubling its YouTube revenue in just one year of partnering with Orfium. They have also matched 588K recordings in just 32 months and resolved 654K conflicts while updating their outstanding catalogs.

We’ve been very impressed with Orfium’s team and technology. They are experts in their field and are constantly working to increase the value of YouTube revenues for our business in the United States. We feel confident knowing they’re managing our CMS to ensure all of our songwriters and artists are getting the most value from their work.

David Holley, CEO, Wise Music Group

Could your music catalog be generating more revenue across UGC platforms? Talk to Orfium today to find out.

Talk to Orfium Today

The ORFIUM Group of Companies (“Group”) prioritises the conduct of its business in a responsible and lawful manner.    

In this context, our Group encourages a corporate speak-up culture in its workplace and ensures that its employees and other reporting persons feel safe sharing their concerns and reporting misconduct concerning potential violations of European Union legislation.

To this end, we have established and put into operation an internal reporting channel (“Channel”) through the present safe online platform for the submission of reports regarding the violation of EU law in the following areas:

  1. public procurement; 
  2. financial services, products and markets, and prevention of money laundering and terrorist financing; 
  3. product safety and compliance; 
  4. transport safety; 
  5. protection of the environment; 
  6. radiation protection and nuclear safety; 
  7. food and feed safety, animal health and welfare; 
  8. public health; 
  9. consumer protection; 
  10. protection of privacy and personal data, and security of network and information systems; 
  11. competition and state aid rules; and 
  12. other sectors of law falling within the material scope of the Whistleblowing Directive, as transposed in national legislation.

Internal reports may be submitted by Orfium employees, agents, trainees, volunteers, contractors, subcontractors, suppliers, persons working for Orfium through third-party suppliers and persons belonging to the administrative, management or supervisory body of Orfium or the Group and to any persons who acquire information through their work-related activities with the Group.

Internal reports may be submitted by electronic means through our online platform available on this webpage. Alternatively, internal reports may be submitted orally through a personal meeting with the Person designated to handle the Receipt and Follow-up of Reports (“Designated Person”) within a reasonable time, at the request of the reporting person, sent to the following e-mail address: tellme@orfium.com.

Our online whistleblowing platform is hosted in our secure servers. Any report you make will be kept strictly confidential and will not be disclosed to any third party other than the Orfium Designated Person and authorised staff members responsible for receiving, or following up on, reports.

You may submit reports both by name and anonymously. In the case of reports by name, we will pseudonymize any personal data being processed.

The operation of the Orfium Whistleblowing Channel as well as the receipt, monitoring, management, follow-up and archiving of reports are further specified in detail by the specific terms of the Group’s Whistleblowing Policy, as in force from time to time, which is available at the following link: (“Policy”).

Before submitting a report, you are required to read carefully and understand our Orfium Whistleblowing Notice, which provides detailed information (i) about the operation of the Channel, the procedures for following up on reports and the rights of reporting persons; and (ii) about the processing of personal data during the operation of the Channel and the management of reports.

Orfium Whistleblowing Notice

Whistleblowing Policy Form

Building upon Orfium’s recent partnerships with Japanese entertainment giants Avex and Bandai Namco, we are thrilled to announce our latest collaboration with JASRAC, Japan’s largest collective management organization (CMO). Through this exciting venture, our AI-powered technology services will enhance YouTube revenues for JASRAC’s extensive network of over 20,000 members, including talented songwriters, composers, and publishers.

Japanese music is enjoying a global surge in popularity, thanks in part to its remarkable growth on streaming and user-generated content (UGC) platforms. As a result, JASRAC’s members’ catalogs are gaining unprecedented exposure and appreciation worldwide. In recent years, the Japanese Performing Rights Organization (PRO) has been actively exploring avenues to improve remuneration for its valuable community.

JASRAC’s collaborative efforts with the Music Publishers Association of Japan

In recent years, JASRAC has been actively seeking ways to improve financial returns for its community amidst the growth of Japanese music. In collaboration with the Music Publishers Association of Japan, it has been involved in investigating fingerprint technology since 2017. This endeavor led to the implementation of music recognition devices in retail outlets in 2021.

Orfium’s support for UGC revenue streams

Global recognition of JASRAC’s catalog, combined with Orfium’s expertise, has the potential to significantly enhance revenue for JASRAC’s members.

By enabling revenue claiming across UGC platforms, we strive to ensure that Japanese creators and rights holders can fully capitalize on this new revenue stream and maximize the success of their music on a global scale.

Be a part of the journey with Orfium Japan

If you’re interested in our journey and want to explore opportunities with Orfium, get in touch here.

ORFIUM has adopted the Information Security Policy and is committed to the effective implementation and provision of resources for the improvement of the Information Security Management System (ISMS).