One of the most significant contributions technology has made to the investment research process is making data easily available to everyone. Today’s data is massive in volume and differs fundamentally from traditional data sets in its core characteristics. Over time, the availability and consumption of data have vastly improved, from speed of delivery, storage, and overall control to the ability to build complex, computationally intensive analytics on top of it. What has not grown, however, is the number of analysts available to consume that data and derive meaningful insights. This mismatch has produced a new phenomenon that we like to call “information deluge.”
Technology has made it possible to convert every human expression and its interaction with machines into accessible data, such as voice and text transcripts, IoT data, and more. These data sources have also exploded in the last five years, leaving investment managers overwhelmed by the sheer number of choices. With so much data readily available, no one wants to be left behind, so funds collect multiple streams of data and create colossal data piles in their technology ecosystems, hence the deluge. But collecting and storing data is only one part of the story. What makes the whole process valuable is who interprets this data, and how promptly: the real value of data lies not in the information it provides but in the actionable insights that can be derived from it.
Alert-based and intelligently curated content
As we see it, there are two main problem areas:
- It is no longer humanly possible to review all of the data that is collected. Reading the hundreds of pages in a company’s 10-K or 10-Q, for example, is a laborious and ultimately impractical task.
- Even if machines sift through the data, interpretation still falls to analysts, which slows down decision-making. On top of this, some information remains hidden within the process itself.
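The machine-sifting step above can be sketched in a few lines. The snippet below uses Python’s standard `difflib` to flag only the lines that changed between two filing excerpts; the excerpts themselves are illustrative placeholders, not real 10-K text.

```python
import difflib

def changed_lines(old_text: str, new_text: str) -> list[str]:
    """Return only the lines added or removed between two filing excerpts."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm=""
    )
    # Keep real content changes; drop the "---"/"+++" file headers and "@@" hunks.
    return [d for d in diff if d[:1] in "+-" and not d.startswith(("+++", "---"))]

# Hypothetical risk-factor excerpts from two consecutive 10-K filings.
prior = "We face competition.\nOur supply chain is stable.\nWe rely on key personnel."
latest = "We face intense competition.\nOur supply chain is stable.\nWe rely on key personnel."

print(changed_lines(prior, latest))
# → ['-We face competition.', '+We face intense competition.']
```

A real pipeline would run this section by section across hundreds of pages and route the changed lines to an analyst, rather than asking anyone to reread the whole document.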
What we need is a mechanism through which curated content is delivered to analysts on a regular cadence and in a way that can always be tracked. Curated content means adding value to the processed information and presenting it so that specific actions can be taken. Surfacing text differences within a company’s 10-K is good, but it becomes far more valuable when those differences are stitched to related news items, both past and future. A single data source might not give a strong signal by itself, but it carries more weight when paired with other data sources. These interwoven signals are an excellent starting point for any analyst’s research, but they must be presented in a way that can be accessed at any point in time. If all of these signals are stored and made searchable, users can run historical trend analyses with the same ease they do for financial numbers. For example, management sentiment trends derived from quarter-over-quarter earnings calls can help substantiate an analyst’s thesis.
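The quarter-over-quarter sentiment trend mentioned above can be illustrated with a minimal sketch. The word lists, scoring rule, and transcript snippets below are all assumptions for illustration; a production system would use a proper NLP model rather than a toy lexicon.

```python
# Illustrative sentiment lexicon (an assumption, not a real model).
POSITIVE = {"growth", "strong", "improved", "record"}
NEGATIVE = {"decline", "weak", "headwinds", "uncertainty"}

def sentiment_score(text: str) -> int:
    """Positive minus negative word count: a crude directional signal."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical earnings-call snippets keyed by quarter.
calls = {
    "2023Q1": "strong growth and record demand",
    "2023Q2": "growth despite headwinds",
    "2023Q3": "decline and continued uncertainty ahead",
}

# Stored per quarter, the scores become a searchable trend series.
trend = {quarter: sentiment_score(text) for quarter, text in calls.items()}
print(trend)
# → {'2023Q1': 3, '2023Q2': 0, '2023Q3': -2}
```

Persisting these scores alongside financial numbers is what makes a query like “how has management tone shifted over the last four quarters?” answerable at any point in time.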
Investment strategy and research data
One commonality among all research management systems (RMS) on the market today is the type of data sources they make available to their users: mostly market data, company financial numbers, published news, quarterly transcripts, and other regulatory and management disclosures. But if you are a macro fund, or a long-short fund specializing in a particular sector like biotech or credit, that list of data types is very limiting. As a result, most of these funds either use RMS tools as note-taking and collaboration software or build a separate data ecosystem with little to no integration with the RMS. In some cases, they may even end up creating their own proprietary RMS tools in-house. The first approach is more costly and does nothing to integrate the research data that is spread all around a fund.
Every investment strategy is unique and needs its own set of data sources. For example, a macro fund may require everything from country-level economic data to central banks’ periodic monetary policy releases, while a biotech fund may depend on clinical trial phase data. Given that each of these data sets has its own intricacies and complex requirements, a generic RMS tool can never truly work.
When seeking ways to navigate the information deluge, it is essential to work with a partner that possesses the required knowledge and experience to wade through all the complexities associated with unique and complex data sources.