NECPUC 2017: Recap

Last month, I was honored to be a panelist at the New England Conference of Public Utilities Commissioners (NECPUC). Thanks to our moderator, Massachusetts Commissioner Karen Charles Peterson, we had a great discussion about the opportunities and challenges facing artificial intelligence (AI) applications in regulated industries, like energy and telecommunications.

By Colin Gounden

AI is often thought of as science fiction. But the truth is, AI is already pervasive, well beyond familiar consumer examples like Apple’s Siri speech recognition and Netflix’s recommendation system. In fact, you likely already use some kind of AI every day, whether you are aware of it or not.

Beyond the consumer level, high-stakes organizations in regulated industries are increasingly incorporating AI into regular operations. For example, a utility company that wants to plan long-term investments in its critical infrastructure may use AI applications to determine which assets are most vulnerable.

Critical decisions, like those that affect the lives of a utility’s customers, require trust in the AI application’s predictions and recommendations. Building this trust and credibility is perhaps the biggest hurdle facing AI developers as organizations incorporate these new technologies into their operations. I recommend that developers start by prioritizing client data security. We know that AI requires data to work, but organizations are often reluctant to share their proprietary information. Providing a secure, efficient way to transfer data that serves the needs of both developers and clients is absolutely essential: enter blockchain. Blockchain has the potential to change the landscape in ways we are only starting to imagine, much as the internet changed daily life. Its ability to securely share and store sensitive information makes it a significant innovation for the continued development of AI.
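For readers who want a concrete picture of why blockchain is tamper-evident, here is a minimal sketch of a hash-chained ledger in Python. It illustrates the general principle only; the record names are hypothetical, and this is not a description of any particular blockchain platform.

```python
import hashlib
import json
import time

def make_block(payload: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its payload and the previous block's hash."""
    block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def is_tampered(block: dict, prev_hash: str) -> bool:
    """Recompute the hash; any edit to the payload or the chain breaks verification."""
    body = {k: v for k, v in block.items() if k != "hash"}
    expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block["hash"] != expected or block["prev_hash"] != prev_hash

# A tiny ledger: each record of shared data is chained to the one before it.
# The dataset and owner names here are purely illustrative.
genesis = make_block({"event": "ledger created"}, prev_hash="0" * 64)
record = make_block({"dataset": "meter_readings_2017.csv", "owner": "utility_a"},
                    prev_hash=genesis["hash"])

print(is_tampered(record, genesis["hash"]))  # False until anyone alters the record
```

Because each block’s hash depends on the previous block, no party can quietly modify shared data after the fact, which is what makes the structure attractive for exchanging proprietary information.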

Transparency, or the “why” behind AI predictions and recommendations, is another important component of building trust. Via Science works specifically to address this concern with its explainable AI systems. Most deep learning models have an input and an output, but the steps in between are a mystery. One of our key differentiators is making the logic behind AI recommendations more transparent, so that companies and the regulators that oversee the industry have greater trust in the decisions being made.
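To make the idea of transparency concrete, here is a minimal sketch of one common approach: fitting an interpretable model whose decision rules can be printed and audited. This is a generic illustration on synthetic data, not Via Science’s explainable AI system, and the feature names are hypothetical.

```python
# A minimal sketch of surfacing the "why" behind a prediction with an
# interpretable model. Feature names and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

rng = np.random.default_rng(0)
features = ["asset_age_years", "outage_count", "load_factor"]
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # synthetic "vulnerable asset" label

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Unlike an opaque deep network, the fitted rules can be printed and audited,
# giving operators and regulators visibility into each recommendation.
print(export_text(model, feature_names=features))
```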

While developers can secure client data and provide transparency, utilities and other high-stakes, regulated organizations may still be hesitant to integrate AI. Pilots are the best way for these organizations to familiarize themselves with AI applications and assess their potential impact and value. In fact, pilots are a growing trend across utilities in particular, as more and more consider turning to AI for modernization initiatives.

To view my full presentation from NECPUC, please visit their website.

Core Philosophy: Client Service is Essential to Successful AI Applications

Update, May 2018: VIA is now hiring for a Client Service Lead to join its Somerville-based commercial team. You can apply by emailing your resume and cover letter to jobs@viascience.com.

Last month, we welcomed Kristen Merrill, our new vice president of client service, to the Via Science team. Kristen’s arrival is the perfect opportunity to share our perspective on the importance of client service and how it applies to effectively implementing artificial intelligence (AI) initiatives (like machine learning applications).

By Via Science Marketing

Any software tool could be rendered useless without intuitive, user-friendly design and support. Via Science’s chief operating officer, Kate Ravanis, illustrates this point with an example of when our office implemented a new phone system. “Everyone knows how to use a phone, but since each system is slightly different, getting a new one means taking time to learn how to set up voicemail, add extensions, and use other functionalities. In theory, this is not hard to do. But, even taking five minutes away from your day-to-day focus to learn something new can become a source of frustration or a reason to delay adoption. In our case, the phone company sent a representative to walk us through the set-up. Of course, we could have read the manual and figured it out on our own, but having a client service representative saved us from having to even think about set-up, ultimately minimizing time lost to integrate the phones and creating a much more pleasant experience.”

Whether we’re talking about office phones or AI applications, Via Science’s core philosophy is that all new technology is adopted faster by organizations and is more effective when it comes with a strong client service wrapper. We’ve seen large organizations spend upwards of $20M on data collection efforts, only to find they lack the resources and processes to properly utilize that data. Via Science works to avoid outcomes like this; a dedicated client service expert is provided on every client engagement to make our clients’ lives easier and our applications as impactful as possible. Using some of our work in the energy industry, here are three examples of how client service is essential to building successful applications:

A client service expert helps to frame the right problem, identifies appropriate key performance indicators (KPI), and aligns stakeholders across departments.

Via Science invests its efforts in answering the right questions. Rather than simply pointing out interesting trends in data, we first determine exactly what our clients need to influence. For example, let’s say a utility company wants to reduce theft of service. Our client service expert would work to identify how this is currently done internally: which departments are involved, what data they currently use, and what existing processes help identify and reduce energy theft. Understanding these elements helps us work with our clients to vet the importance of the issue across the organization and frame the problem in a way that clearly illustrates how AI, and more specifically machine learning, can influence (and be compared against) existing operations.

Once the problem and KPI are identified, our client service expert ensures all stakeholders in the organization are aligned around the goal and determines the roles and input needed from each team.

A client service expert determines how we will define success in both the short and long term (as opposed to waiting months or years to evaluate the outcomes of various initiatives).

Let’s return to the utility company example. Before the company can reduce theft of service, it first has to identify where theft is occurring. Once it knows where and how, it needs a plan to reduce energy theft in the future. However, using the reduction of energy theft as a benchmark for the success of machine learning applications means it could take several months to determine whether machine learning had a positive or negative impact. Even then, so many other factors go into reducing energy theft (policy changes, equipment used, budget to focus resources on theft, etc.) that separating out the specific impact machine learning made is challenging.

In an effort to evaluate and integrate applications more effectively, a client service expert will identify the internal metrics already in place that support the identified KPI, or as Via Science refers to them: CPM™ (Corresponding Pilot Metrics). CPM™ provide a clear way to benchmark how new machine learning applications compare to current processes and help quickly assess the potential influence they could have on the client’s existing environment. For the utility company example, CPM™ might include the time required to develop algorithms that identify theft of service, the accuracy of the theft-of-service instances identified, or the ease with which specific theft-of-service alerts can be reviewed and assessed.
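As a hypothetical illustration of how such a metric might be computed, the short sketch below benchmarks the precision of machine-generated theft alerts against a utility’s existing manual process. The metric, meter IDs, and figures are invented for illustration; they are not Via Science’s actual CPM™ definitions.

```python
# A hypothetical pilot metric: what share of flagged alerts did investigators
# confirm as actual theft? All identifiers and numbers below are invented.

def alert_precision(flagged_alerts, confirmed_thefts):
    """Share of generated alerts that investigators confirmed as theft."""
    confirmed = sum(1 for alert in flagged_alerts if alert in confirmed_thefts)
    return confirmed / len(flagged_alerts) if flagged_alerts else 0.0

# Benchmark the machine learning pilot against the existing manual process.
ml_alerts = ["meter_102", "meter_348", "meter_577", "meter_923"]
manual_alerts = ["meter_102", "meter_348", "meter_411", "meter_640", "meter_702"]
confirmed = {"meter_102", "meter_348", "meter_577"}

print(f"ML pilot precision:       {alert_precision(ml_alerts, confirmed):.0%}")
print(f"Manual process precision: {alert_precision(manual_alerts, confirmed):.0%}")
```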

A client service expert focuses on how clients will use the application in their current environment. The expert relays this information to Via Science’s development team to ensure the final product we build can be seamlessly integrated into the client’s existing workflow.

In order to make our clients’ lives easier (and provide them the most value), we strive to create applications that fit into their current operations. As we mentioned earlier, client service experts seek first to understand how the identified problem is currently solved within the organization (which teams, systems, and data are involved). Awareness of these elements is critical to ensuring we build an application that can easily be adopted and integrated into existing workflows. For example, we would look to build a first version of the application using data that is already centralized or already used for theft-of-service detection. This ensures faster adoption and easier integration than a system that requires new data to be collected across multiple stakeholders (who may not have been involved before).

We understand that needs and expectations can change throughout the course of engagements as we learn more about the problem. Our client service team works to proactively seek out feedback (specific requests, unexpected events, changes of focus) to continue refining the application to better meet client needs.

There is no shortage of content touting the potential of artificial intelligence applications to make organizations more efficient, effective, or successful. However, applications must address critical challenges and fit within an organization’s workflow in order to reach this potential. Via Science believes its client service team supports the success of both its applications and its clients by communicating critical information across key stakeholders. We are passionate about making sure the tools we build are highly customized, user-friendly, and flexible enough to grow and change with evolving client needs.

Via Science on the Road: Beijing and Shenzhen

In March, I traveled to China (Beijing and Shenzhen) at the invitation of energy industry leaders to discuss the role of artificial intelligence (machine learning applications, in particular) in improving the efficiency of electricity generation, transmission, and distribution. This trip was part of Via Science’s ongoing efforts to learn more about industry needs from leaders across the globe and how our applications can best address them.

By Colin Gounden

China is the world’s largest consumer of energy, and its rapid growth has led to a massive modernization of its power infrastructure. One current example is the nation’s $1T Belt and Road initiative, which plans “to connect Asia, Europe, the Middle East and Africa […] using roads, ports, railway tracks, pipelines, airports, transnational electric grids and even fiber optic lines.” China is also a global leader in smart grid investment. The country is developing energy markets beyond its borders (for example, it has previously invested in electricity infrastructure upgrades in Pakistan).

As a result of China’s recent and active investments in power grid infrastructure, the country faces very different problems than those posed by the aging transmission and distribution infrastructure of the U.S. (i.e., reliability and affordability). In fact, China’s modern infrastructure provides vast amounts of real-time streaming data on operations, and analytics has been applied to it for some time. There are some areas of overlap, however, such as theft of service: China and the U.S. experience the same percentage (6%) of power loss due to theft and unsecured infrastructure. The use of AI to address theft of service is increasing and is an area that Via Science is exploring now.

I was also lucky to receive a little press in a leading technology publication while there. Here is a link, for those of you wishing to brush up on your Mandarin.

Using Artificial Intelligence to Identify Energy Theft

The modernization of the electric power system with smart grid technologies has been a hot topic for the last decade and will continue to be for the decade to come. Smart grid technologies promise many benefits to customers, including better reliability and dynamic pricing. For power companies, however, a major challenge has been the rise in smart meter hacking and electricity theft, also known as “theft of service.”

By Jeremy Taylor

An important component of smart grids is advanced metering infrastructure (AMI), a system that collects and analyzes individual energy usage data. With AMI, power companies can offer demand response services and cut costs by performing fewer manual inspections. However, since these systems are automated, they are susceptible to manipulation and can leave power companies vulnerable to energy theft. Energy is estimated to be the third most stolen item in the U.S. after credit cards and cars, so it’s clear that preventing theft is a high priority for power companies.

Energy theft occurs at the individual consumer level. To detect and prevent energy theft, power companies must leverage AMI data to identify individual consumers whose billed consumption profile does not match their actual consumption. In order to identify whether an individual customer has had their energy stolen or is committing theft themselves, power companies need to both build individual consumer profiles to detect changes in usage behavior over time and reduce false positives by incorporating subject matter expertise, often only available as human intuition. At Via Science, we have solved an analogous problem in cybersecurity that directly translates to the energy theft challenge facing power companies.

Building Individual Consumer Profiles to Detect Changes in Behavior

Identifying anomalies in an individual’s activity is an important problem in cybersecurity. At Via Science, we have developed systems that monitor the activities of individual users on a company’s IT network to build profiles of user behavior and detect changes in that behavior over time. For instance, the system looks at the number of emails sent to external addresses during the course of a typical workday. A spike in emails to external addresses could signal an employee sending documents to a competitor. This indicator is not perfect, because there are legitimate reasons for employees to send documents via email, like sending a proposal to a potential client. A generic system would flag sudden increases in email attachments like this and thus produce many false positives. When legitimate usage is flagged, it unnecessarily occupies already overburdened investigative resources. In the case of energy usage, a sudden decrease in power consumption may signal energy theft, or it may simply mean a customer has gone on vacation. False positives are a major issue in any investigative analytic system because investigative resources are always limited.
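As a rough illustration of per-customer profiling, the sketch below scores a consumer’s recent usage against their own historical baseline. The data and threshold are hypothetical; a production system would account for seasonality, weather, and much more.

```python
# A minimal sketch of per-customer profiling: flag a consumer whose recent
# usage deviates sharply from their own historical baseline. Data and the
# review threshold are invented for illustration.
import numpy as np

def usage_anomaly_score(history_kwh: np.ndarray, recent_kwh: np.ndarray) -> float:
    """Z-score of recent mean usage against the customer's own baseline."""
    baseline_mean = history_kwh.mean()
    baseline_std = history_kwh.std(ddof=1)
    return abs(recent_kwh.mean() - baseline_mean) / max(baseline_std, 1e-9)

rng = np.random.default_rng(1)
history = rng.normal(30.0, 4.0, size=365)   # a year of daily kWh readings
recent = rng.normal(12.0, 4.0, size=14)     # a sudden sustained drop

score = usage_anomaly_score(history, recent)
# High scores are candidates for review, not verdicts: a vacation can look
# exactly like a tampered meter, which is why expert review comes next.
print(f"anomaly score = {score:.1f} -> "
      f"{'flag for review' if score > 3 else 'normal'}")
```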

Solving for False Positives

Cyber investigators develop expertise that combines knowledge of user behavior with intuition about which activities are likely indicators of cyber-espionage. Investigators of energy theft develop similar expertise. At Via Science, we have built human-in-the-loop systems that combine the best of artificial intelligence (learning on massive amounts of data using fast processing at vast scale) with human expertise to better detect suspicious user activity. To do this, we created an application that presents investigators with the most suspicious user profiles on a daily basis using our AI tools (see image below). The expert then flags each profile as a case that warrants further investigation or a case that, based on their judgment, is not suspicious. Each day an expert may be presented with a small number of cases (say, 20) to review. Over time, a knowledge base is built up that reflects expert judgment, and the system learns from that knowledge base to better predict potential risks. Research has shown that this method can reduce false positive rates by a factor of three or more. That translates directly into increased ROI for investigative budgets.

Human-in-the-loop systems allow experts to provide feedback to the AI application, thus refining its accuracy over time.
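For the technically curious, here is a minimal sketch of the review cycle described above: a model surfaces its most suspicious profiles, an expert labels them, and those judgments feed the next round of training. It is a generic illustration of the pattern on synthetic data, not Via Science’s production system.

```python
# A minimal sketch of a human-in-the-loop review cycle: score profiles, send
# the most suspicious to an expert, and fold their labels back into training.
# All data here is synthetic, and the expert is simulated.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(2)
X_pool = rng.random((1000, 5))  # feature vectors for unreviewed profiles
labeled_X = list(rng.random((40, 5)))          # previously reviewed cases
labeled_y = list(rng.integers(0, 2, 40))       # expert verdicts (0/1)

model = LogisticRegression()
for day in range(3):
    model.fit(np.array(labeled_X), np.array(labeled_y))
    scores = model.predict_proba(X_pool)[:, 1]  # suspicion score per profile
    queue = np.argsort(scores)[-20:]            # ~20 cases for today's review

    # Stand-in for the expert: in practice, an investigator flags each case.
    expert_labels = (scores[queue] > 0.6).astype(int)

    labeled_X.extend(X_pool[queue])             # grow the knowledge base
    labeled_y.extend(expert_labels)
    X_pool = np.delete(X_pool, queue, axis=0)   # reviewed cases leave the pool
```

Each pass through the loop adds expert judgment to the knowledge base, which is how the system’s ranking of suspicious cases improves over time.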

Via Science on the Road: Taiwan and Japan

During recent travels to Taiwan and Tokyo, CEO Colin Gounden met with business and thought leaders in energy to discuss how AI, machine learning, and analytics can solve the urgent challenges facing the industry.

Living Our Values: Via Science Offsites

Twice a year, Via Science’s Cambridge and Montreal teams come together to put two of our guiding values, “Be each other’s biggest fan” and “Learning never goes out of style,” into action.

By Via Science Marketing

Via Science is proud to have offices in two booming tech hubs: Cambridge, Massachusetts and Montreal, Quebec. Both locations are known for their cutting-edge university research (e.g., MIT, Harvard, and McGill), global investments in innovative tech, and local outposts of international brands (like Google and Microsoft).

Despite the physical distance between locations, we remain close through daily communication between our Cambridge-based commercial team and our Montreal-based technical team. Drawing from agile methodology, we start each day with scrum: an opportunity for team members to share what they accomplished yesterday, what they will continue working on today, and any blockers for their progress. In addition, a flurry of Slack messages, Google Hangout meetings, and good old-fashioned phone calls allow us to stay up to date on client projects and product development.

However, nothing quite beats face time (and we mean the in-person type, Gen Zers!) when it comes to collaborating with colleagues to discover new solutions or ways of thinking. So, twice a year we get the whole company together for an offsite in Montreal to reconnect, reflect, and plan for the year ahead. Our next one is coming up later this month.

The goal of every Via Science offsite is, to quote the wise words of Winnie the Pooh, to “stop bumping for a moment” and think of a new way to come downstairs. Workdays can quickly fill up with tasks, challenges, and roadblocks that can make it hard to take a step back and consider the big picture: Why spend time developing this algorithm, or this blog post, or this customer memo?

Offsites strive to bring those daily tasks full circle by illustrating their impact across departments and on the company overall. Taking this time to reflect and consider the context of our individual projects allows us to discover more efficient and effective ways to move forward together. We use a new theme each offsite to position these reflections within the context of our larger objective for the year. Previous themes have included Focus, Expediency, and Learning Never Goes out of Style.

Individual presentations are one way we reflect as a team. Team members have the opportunity to share their expertise and progress, and to hear from colleagues across departments on how their work has supported other projects. For example, during our most recent offsite last September, we were able to trace how one blog post from 2015 led to a key introduction and, ultimately, the addition of one of Via Science’s most valued partners. In fact, our technical team spent much of 2016 collaborating with this partner to develop a new application. While the links between our roles aren’t always obvious, this example underscores the importance of all departments working towards shared objectives and a shared vision.

In addition to providing a platform for team members to interact, offsites also create direct access to our CEO and his vision for the company. During Q&A sessions and retrospectives, Colin shares the strategic goals for the year ahead and how each team can support these goals. Team members have the opportunity to provide direct feedback that helps to shape the company vision.

Offsites aren’t all work and no play, however. We make it a point to plan an inclusive, interactive activity in Greater Montreal, like bowling or a cooking class, to build on the strong team bond reinforced throughout the day’s presentations. Most recently, we enjoyed a cooking class at Académie Culinaire, pictured above.

Investing the time and resources of the entire company in a multi-day offsite may seem like a luxury for lean teams like ours. But we firmly believe that creating uninterrupted time to connect as a team and as individuals is essential rather than a luxury. Two of our guiding values are to “Be each other’s biggest fan” and “Learning never goes out of style,” and we put them into action through these opportunities to reflect, brainstorm, and learn as a whole team. Our people-first company culture shapes all that we do, and it is not something we will lose sight of as we continue to grow.

You Asked, We Answered: The Technical Day-to-Day for a Via Science Data Scientist

Last year, we interviewed our Montréal-based data scientists to share their insights about Via Science’s company culture. Based on feedback we received from our readers, we decided to share a more technical look at the day-to-day of our data scientists. We spoke with two of the team’s newest additions to discuss their experiences with Via Science’s preferred technologies, diverse clients and projects, and the overall learning environment.

By Via Science Marketing

What is it like to be a data scientist at Via Science?

“I get to work with really smart people from different backgrounds like software and computer engineering, theoretical statistics, biostatistical analysis, and particle physics. Even when we’re not tackling the same problems, we discuss the pros and cons of different approaches, which I find very valuable.”

“Not everyone has the same view on the best way to move forward with a problem. The benefit of having different views and experiences is that in the end, we find the best possible solution.”

How is working on a team of data scientists for a data science company different than working as a data scientist at a company where that’s not the core offering?

“The keyword here is team. My background is in a university setting, so I’m used to doing things on my own and not having a lot of interaction. Via Science is the exact opposite. If you have a problem, you just ask. 99% of the time someone has the answer and you can move forward together much faster instead of banging your head against a wall.”

“Before joining Via Science, I was working as the only data scientist at a retail company. While one benefit of that environment was that I always got to do things my way, I wished there was someone else available to bounce ideas off of and make sure I was on the right path. You can learn how to do things by yourself and how to network outside your company, but at a certain point you feel like you’re not developing anymore without a broader team to push you.”

What has your experience been with Via Science’s Agile office environment?

“The daily morning scrums are a great way to get a glimpse of what the rest of the company is doing across various client projects.”

“The agile methodology allows us to reevaluate project progress regularly with daily scrums and biweekly company demos, so we have opportunities to try new approaches or take a different direction along the way. Our tools (e.g., Maven projects, Bitbucket, Eclipse IDE, JUnit tests) help keep each other’s contributions and changes as seamless as possible. So, if the client wants to change something partway through the sprint, our tools allow us to refactor and test that code promptly.”

If you could allocate a percentage around the tasks you spend your time on during any given week, what would that look like?

“With new clients and new projects, I spend about 30% of my time reviewing, cleaning and understanding the data – using R and Python to get a quick view. I spend another 20% building the project pipeline and thinking about how we get from the position we are currently in with the data to where the client needs us to be. I use Focus™ to explore the data and we use our daily Scrum meetings to review and plan with the team. I spend the rest of my time (about 50%) using Focus™ to get results, predictions and correlations, and on quality assurance and preparing the presentation.”

“I spend about 80% of my time building pipelines, debugging code, creating tests to make sure we can rely on the results we get, and adapting our Java-based computing infrastructure to meet the requirements at hand. I spend the remaining 20% analyzing data, reading the literature, and improving my skills with various technologies.”

How much of a typical day is spent doing something you already know vs. learning something new?

“I learn new things every day. When we tackle a new project, we have to ramp up quickly and learn any new technologies we might incorporate into our stack to achieve the best results for our clients. On any given project, I spend about 20% of my time learning these new technologies and about 80% of my time executing. That distribution can also change from project to project.”

“90% of my work is focused on “new” things if you are referring to the content and framework for different projects. Even when I’m doing something I know, I’m still learning because every client has different data, criteria, expectations and goals. Even repeat clients want different things with each new project, so you have to think about doing the things you already know in new ways.

“About 10% of my work is completely new in terms of tools and platforms. Recently, I needed to launch an AWS CFN cluster in the cloud for a client demo, which I had never done before. It turned out to be super complex, but I was able to work with Jeremy, our lead scientist, on it and got the outcome we needed for the project.”

What is the learning environment like for new technologies, approaches and skills?

“Via Science encourages the exploration of new technologies and approaches. There is always an opportunity to learn something new and do something completely different from what we already know works. Also, if you just ask around, there will be people willing to help and you may learn something you wouldn’t have learned otherwise.”

“Everyone here is a lifelong learner. We try new technologies on a regular basis and decide if it makes sense to add to our stack. And it’s ok not to know the answer and take time to figure it out. What’s expected here is that people are up for the challenge.”

What are the common software platforms or tools (in the technology stack) that you use most regularly at Via Science?

“We are proactive about investigating new and exciting technologies to see what works for us. We use the Java programming language as a workhorse, JavaScript to create user interfaces, Docker to ensure reproducibility, and AWS infrastructure to avoid relying on mainframe computers and provide added value to clients. To improve day-to-day communication, we use Slack. It is a great communication tool that allows us to be efficient, well informed, and find solutions.”

“We use MongoDB to build schemas, Ansible playbooks to make sure every computer in a given cluster gets the same software versions, and R to get a quick analysis of a new dataset. Python, Java, the Spring framework, SciPy, and Weka are some other tools we use for projects.”

Was there anything else you would like to share with people interested in joining Via Science?

“It was great to interview at the office on a regular day for the team. I saw the team’s interactions and got a feel for what the environment was like. I think prospective candidates should learn more about the technology stack to see what they already know, take time to explore things they don’t know and decide if Via Science feels like the right fit.”

“One thing that helped me quite a bit was knowing the basics of what Via Science does and that the REFS™ engine is based on Java. If you’ve done data science work before, you probably have a sense of it. The home assignment during the interview also helped because the data is what you’ll see in real projects.

“Before officially joining, I did not know I would do so much non-data-science work, like using the Spring framework, clusters, and JavaScript, but it keeps the neurons firing, and you keep researching and learning how to do new things. There are two distinct components of my job: applying knowledge I already have to different data structures and doing completely new things with data that I’ve never faced before in my life.”

Any final thoughts?

“I think we embody Via Science’s value of “never stop learning” and having fun while we do it. We work hard, but we have a good time and I think that’s the key to work-life balance. We really hit the nail on the head with that one. It creates a very motivating atmosphere – it’s easier to work hard and achieve results when you’re surrounded by friends.”

“Startups are about rolling up your sleeves and doing whatever comes your way.”

Big Math Spotlight: Jim Creighton

Jim Creighton, Head of Manifold Cluster Analysis and Chief Investment Officer at Manifold Partners, sat down with our team to discuss the growing trend of using machine learning and large-scale data analysis to better control risk and predict outcomes in the investment world and what he learned using Via Science’s approach.

By Via Science Marketing

Give us a little background on yourself.

I grew up in a small village in Nova Scotia (Tatamagouche), graduated in mathematics from Dalhousie University, and then quite by accident became an actuary. While working for a large insurance company in Halifax I discovered that assets were more interesting than liabilities. That eventually led to moving to Toronto and starting, with several other people, a firm that became one of the larger quant firms in Canada in the late 1980s. Following this, I headed up Barclays Global Investors (BGI) in Canada, and then took on the role of Global Chief Investment Officer for BGI in San Francisco.

I was also a Global Chief Investment Officer for Deutsche Asset Management in New York and later, the same role for Northern Trust in Chicago. After being the Chief Investment Officer for three of the largest and best quant firms in the business globally, I was ready to go back to investment research. I left Northern to start a firm applying machine learning methods and large-scale data analysis to the investment world.

You have served as Chief Investment Officer to three of the world’s largest banks and managed over $1 trillion in assets throughout your finance career. What first attracted you to this industry and what has kept you going?

I have always tackled jobs that fascinated me, so I was always motivated by the challenge.

The investment world is filled with very bright, interesting people, which makes it difficult to view this as “work”. I get to work each day with great people on difficult problems. What could be better?

The fact that machine learning techniques we have been applying for some years suddenly have become something of a rage has made my work life more rewarding than ever.

Quantitative trading has been applied to finance for more than 40 years with its share of successes and failures. How have you seen it evolve throughout your career and what’s different now?

What we did 20 years ago was very simplistic by today’s standards. Quantitative methods in investment management evolved very slowly and were well behind the methods being used in many other areas of business and science. For example, for many years we have understood that markets are not linear in nature and financial distributions are certainly not Gaussian in character. Yet quants have persisted in using ever more refined methods based on assumptions that are fundamentally flawed.

That is all starting to change, and it’s changing rapidly. Quants in finance are starting to understand that machine learning and other data analysis techniques developed over the past 20 years can indeed be applied to financial markets. That means better predictions, better risk control, and ultimately better outcomes for clients.

It turns out that if you have the right machine learning techniques, data and computing power, you do not have to make simplifying assumptions like linearity and normality. The answers are in the data, if you can just apply the right machine learning techniques to determine the relationships between the data and outcomes.
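As a hedged illustration of this point, the sketch below fits a linear model and a nonparametric learner to a deliberately nonlinear, heavy-tailed synthetic relationship; the linear assumption misses structure that the data contains.

```python
# Synthetic illustration: when the true relationship is nonlinear and the
# inputs are non-Gaussian, a model that assumes linearity underperforms a
# learner that lets the data speak. Toy data only; not a market model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.standard_t(df=3, size=(2000, 1))         # heavy-tailed, non-Gaussian input
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(2000)  # nonlinear response

split = 1500
linear = LinearRegression().fit(X[:split], y[:split])
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])

print(f"linear R^2: {r2_score(y[split:], linear.predict(X[split:])):.2f}")
print(f"forest R^2: {r2_score(y[split:], forest.predict(X[split:])):.2f}")
```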

When most people think of quantitative trading, they think of very high frequency trading. That is not true for Manifold Cluster. Can you share more about your approach?

Quantitative approaches, including machine learning techniques, can be used in a variety of ways. It is certainly the case that high frequency trading is necessarily highly quantitative and automated. And high frequency trading has had a lot of press coverage, so it is relatively well known. But from the point of view of prediction complexity, making a prediction for the next second is a simpler problem than making a prediction for the next month or more. That is in part because many more things can happen over the next month to influence outcomes than over the next second. It is like trying to predict the weather for the next minute versus the next month.

At Manifold, we are interested in making predictions over longer time periods. We consider a number of factors that influence investor behavior and potential outcomes over weeks or months. There is a lot of data, and trillions of calculations are required. So we use high-end computing, data analysis, and machine learning to examine how investors reacted in the past under specified circumstances. We then use past investor behavior in circumstances highly similar to today’s to predict how investors will treat a stock going forward.
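A rough sketch of this “similar circumstances” idea, using a nearest-neighbor lookup over synthetic factor snapshots, appears below. The features, horizon, and data are invented for illustration and do not reflect Manifold’s actual factors or methods.

```python
# A minimal sketch of similarity-based prediction: find the historical periods
# whose market-state features most resemble today's, and average the returns
# that followed them. All factors and returns below are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
past_states = rng.random((5000, 8))             # factor snapshot per past period
forward_returns = rng.normal(0.0, 0.05, 5000)   # return realized after each one

knn = NearestNeighbors(n_neighbors=50).fit(past_states)
today = rng.random((1, 8))                      # today's factor snapshot

_, idx = knn.kneighbors(today)
prediction = forward_returns[idx[0]].mean()     # average outcome of similar pasts
print(f"predicted forward return: {prediction:+.2%}")
```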

Do you see artificial intelligence and machine learning as a growing trend in the industry? What interests you most about the potential and promise of this technology?

When we started applying machine learning to financial markets, the subject was never mentioned in the financial press and we were not taken seriously by other quants, who generally were of the view that if you could not write down the equations linking factor values to returns, you could not trust the “model”. Even in other branches of business and science the mention of machine learning was rare. Now you cannot pick up an industry publication without seeing a reference to some form of artificial intelligence or machine learning.

The growth in interest and understanding has been exponential over the past couple of years. What excites me is that the investing world is starting to wake up to the potential to make better predictions in one of the most challenging and complex systems humans deal with – financial markets. Better predictions mean better outcomes, and that helps everyone.

Manifold and Via Science both have very specific but different approaches to solving problems. What interests you most about using them in combination to improve investment strategies? What value does Via Science’s Bayesian networks approach offer Manifold?

There are many forms of machine learning with different strengths and weaknesses. In a complex problem like predicting future stock prices, it is quite likely that two different types of machine learning will have some overlap. But it is also the case that each system will likely pick up some information in the data missed by the other approach. That is exactly what we find when we use both our approach to machine learning and that of Via Science. Using our factors, Via Science picks up information that we do not pick up and vice versa. We find that combining the signals gives a nice improvement in results over using just one approach, even though both machine learning approaches are using exactly the same data set to make predictions.
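The sketch below illustrates, on synthetic signals, why combining two partially overlapping predictors can beat either alone. The signals are generic stand-ins, not Manifold’s or Via Science’s actual model outputs.

```python
# Synthetic illustration of signal combination: two noisy views of the same
# underlying outcome, blended by a simple regression fit on past data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
truth = rng.normal(0, 1, 3000)                 # the outcome both signals track
signal_a = truth + rng.normal(0, 1.0, 3000)    # stand-in for one learner's output
signal_b = truth + rng.normal(0, 1.2, 3000)    # stand-in for a second learner

X = np.column_stack([signal_a, signal_b])
blend = LinearRegression().fit(X[:2000], truth[:2000])  # learn blend weights

for name, pred in [("signal A alone", signal_a[2000:]),
                   ("signal B alone", signal_b[2000:]),
                   ("blended", blend.predict(X[2000:]))]:
    corr = np.corrcoef(pred, truth[2000:])[0, 1]
    print(f"{name:>15}: correlation with outcome = {corr:.2f}")
```

Because each signal carries some information the other misses, the blend correlates with the outcome better than either input, which is the effect described above.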

What is the next big thing for investing, and Manifold in particular?

We strongly believe that the “next big thing” in quantitative investing is the rapid growth in application of machine learning to financial markets. I would argue that over the next number of years machine learning will become the primary method for making predictions in financial markets.

As for Manifold, we are continuing to extend and refine our application of machine learning to markets. One area we are looking at presently is “feedback loops,” where results are analyzed rapidly and used to adjust predictions in light of experienced prediction accuracy. This looks very promising and is a way to better incorporate changes in market structure and investor behavior.
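One simple way such a feedback loop could work, sketched below with invented numbers: track each signal’s recent prediction errors and shift weight toward whichever has been more accurate lately.

```python
# A hedged sketch of a feedback loop: weight two signals inversely to their
# exponentially weighted recent errors. Error series are synthetic.
import numpy as np

def feedback_weights(errors_a, errors_b, halflife=20):
    """Weight two signals by the inverse of their recent (decayed) squared error."""
    decay = 0.5 ** (np.arange(len(errors_a))[::-1] / halflife)  # recent counts more
    mse_a = np.average(np.square(errors_a), weights=decay)
    mse_b = np.average(np.square(errors_b), weights=decay)
    w_a = (1 / mse_a) / (1 / mse_a + 1 / mse_b)
    return w_a, 1 - w_a

rng = np.random.default_rng(6)
errors_a = rng.normal(0, 0.02, 100)                   # signal A: steady accuracy
errors_b = np.concatenate([rng.normal(0, 0.02, 80),
                           rng.normal(0, 0.08, 20)])  # signal B degrades recently

w_a, w_b = feedback_weights(errors_a, errors_b)
print(f"weight on A: {w_a:.2f}, weight on B: {w_b:.2f}")  # weight shifts toward A
```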

Another area we are thinking about is “layering” machine learning techniques, where one layer oversees and refines some inputs for the next layer. This again is a way to make sure the machine learning systems evolve along with the world they are attempting to predict. Generally, layered machine learning should evolve faster and make better predictions than a single level and approach. We want our forms of artificial intelligence to evolve, much as human brains do with experience, in their ability to understand and predict the world around them.
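As a final illustration, the sketch below shows generic stacking, where a second layer learns how to weigh and refine first-layer predictions, on synthetic data. It is offered only to make the layering idea concrete, not as Manifold’s method.

```python
# Generic stacking on synthetic data: first-layer models predict, and a
# second-layer model learns how to combine their outputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge, LinearRegression

rng = np.random.default_rng(7)
X = rng.random((2000, 6))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(2000)

layered = StackingRegressor(
    estimators=[("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("ridge", Ridge())],
    final_estimator=LinearRegression(),  # the overseeing second layer
)
layered.fit(X[:1500], y[:1500])
print(f"layered model R^2: {layered.score(X[1500:], y[1500:]):.2f}")
```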