GE’s awesome Minds + Machines conference wrapped up this week, and alongside it, a Minds + Machines Hackathon was held. Azuqua sent Phill Ramey and Skyler Hartle down to participate. Twenty teams entered the Hackathon, ranging in size from two to six members.

If you want to skip reading, take a look at this video instead:

So what did Team Azuqua build? First, a bit of background.

To fully understand the depth of the solution implemented for the Hackathon, we need to briefly talk about the Predix platform.

Many, Many Wind Turbines

Across the vast state of California, there are many wind turbines. Each of these turbines has a varying array of sensors that capture data about both the turbine and its surrounding environment. The data being collected from just one wind turbine is valuable. The data collected from a farm of wind turbines? That’s incredibly valuable.

Enter: Predix. Predix is a Platform-as-a-Service used to enable industrial internet-of-things scenarios. In our wind farm example, Predix might be used to capture and store data from these wind turbines. In essence, Predix allows you to take siloed industrial assets, connect them together, and evaluate and analyze the data they return in aggregate.

Two of the core services that tie this all together are Predix Asset and Predix Time Series. Predix Asset lets you create a digital model of a physical asset, and Time Series is a data store for the important sensor data an asset produces. Time Series acts as a historical catalogue: you can query it in real time, analyze it to optimize performance, or use it as the basis for predictive modeling.

So let’s recap: you digitally model a physical asset in Predix using the Asset service, and you store important sensor data from assets inside the Time Series service.
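To make that recap concrete, here is a hypothetical sketch of what querying Time Series for a turbine’s recent temperature readings might look like. The endpoint shape, header names, tag naming convention, and asset id below are illustrative assumptions, not the exact Predix API; the sketch only builds the request payload and headers rather than calling a live service.

```python
import json

# Hypothetical sketch of a Predix Time Series query. Tag names, the
# relative-time format, and header values are assumptions for
# illustration only.

def build_temperature_query(asset_id, window_ms=15 * 60 * 1000):
    """Build a query payload for the last `window_ms` of temperature data."""
    return {
        "start": f"{window_ms}ms-ago",              # relative start time
        "tags": [
            {
                "name": f"{asset_id}.temperature",  # one tag per sensor stream
                "limit": 1000,
                "order": "desc",
            }
        ],
    }

def build_headers(token, zone_id):
    """Predix services are multi-tenant: requests carry a service-instance
    zone id and an OAuth2 bearer token (both placeholders here)."""
    return {
        "Authorization": f"Bearer {token}",
        "Predix-Zone-Id": zone_id,
        "Content-Type": "application/json",
    }

payload = build_temperature_query("turbine-042")
print(json.dumps(payload, indent=2))
```

In a real application, this payload would be POSTed to the Time Series query endpoint for your service instance, and the response would contain the stored datapoints for each requested tag.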

In practice, there are a number of other important services that can be leveraged to provide further business value to the machines you want to connect to an industrial internet of things.

OK, that’s awesome. So what did Team Azuqua build?

Back to our wind turbine example. Let’s say that one of the turbines has started reporting anomalous temperature readings. The temperature of the turbine is far beyond what it should be, possibly indicating some sort of mechanical failure. What do we do in this scenario? Let’s assume we have service crews out in the field, somewhere in the vicinity of the asset exhibiting the anomalous behavior. Wouldn’t it be nice if we could dispatch a service crew to that asset, in real time, to service the wind turbine in question?

That’s the application Azuqua conceived and built during the Hackathon. The logic that powers this concept was built entirely on the Azuqua Platform. Leveraging Azuqua’s innate ability to create applications as microservices, we were able to build an application in twenty-four hours that registered temperature readings, assessed their severity, and automatically dispatched the nearest service crew.
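The actual logic ran as FLOs on the Azuqua Platform, but the core dispatch idea can be sketched in plain Python: check a reading against a severity threshold, then pick the closest available crew by great-circle distance. The threshold, crew names, and coordinates below are made-up illustrative values, not data from the real application.

```python
import math

# Plain-Python sketch of the dispatch logic we modeled as FLOs.
# Threshold, crew data, and coordinates are illustrative assumptions.

TEMP_THRESHOLD_C = 95.0  # readings above this are treated as anomalous

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def dispatch(reading_c, turbine_pos, crews):
    """Return the nearest available crew if the reading is anomalous."""
    if reading_c <= TEMP_THRESHOLD_C:
        return None  # nominal reading, nothing to do
    available = [c for c in crews if c["available"]]
    if not available:
        return None  # anomaly detected, but no crew to send
    return min(available, key=lambda c: haversine_km(turbine_pos, c["pos"]))

crews = [
    {"name": "Crew A", "pos": (35.37, -118.43), "available": True},
    {"name": "Crew B", "pos": (37.73, -121.66), "available": True},
    {"name": "Crew C", "pos": (33.92, -116.58), "available": False},
]
print(dispatch(104.2, (35.10, -118.30), crews))
```

In the hackathon app the equivalent steps lived in separate FLOs, with the turbine and crew positions coming from Predix rather than hard-coded sample data.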

Additionally, once a crew reached the asset, they could scan the device via the Azuqua Mobile app and put it in “Service Mode”. In front of all this data and functionality, we also built a user dashboard showing the current locations and statuses of all the assets and service crews.

So how did a team of two beat nineteen other teams? Why, the answer should be obvious: Azuqua!

Each service in our application was a FLO. Retrieving data from Predix Asset; updating the locations of the crews in Predix Dynamic Mapping (a cool Predix service I’ve glossed over); storing temperature data in Predix Time Series; all of these services were critical to driving functionality. And this idea of retrieving, storing, and manipulating data is not unique to Predix; these are the fundamental building blocks of data-driven applications.

The only thing we “coded” during the Hackathon was the user dashboard. This model of creating service logic as FLOs/microservices, and then consuming them on forward-facing applications, is one that we highly advocate.

I should note that although we didn’t “code” the majority of our application’s logic, that isn’t to say we didn’t design its architecture and logic. Since Azuqua is a low-code development environment, we still spent our time “programming”, just not in the traditional sense! We were able to focus on the elements of the application that added the most value, instead of getting bogged down in code organization, library configuration, or the other chores that can impede application development. For a twenty-four-hour hackathon, this was an invaluable advantage.

If you’re interested in learning more about the solution we developed, feel free to email either Skyler Hartle or Phill Ramey.