In Loggly’s view, log data is the big data problem that everyone has. No matter the company, everyone has log data from the machines that run across distributed architectures.

The company, which today announced a $15 million C round of funding, takes the approach that people need some context, a place to start, when looking through the billions of logs that arrive in all the forms technologists often describe as “unstructured data.”

The context Loggly offers is similar to what an e-commerce company does to provide a good shopping experience. If you are looking for shoes, the site gives you a place to start, showing different styles and different colors.

In the previous version of its technology, Loggly offered better scale-out capabilities, leveraging Amazon Web Services (AWS) and a co-location center. It also made the service simpler to use, an approach CEO Charlie Oppenheimer relates to his days at Apple, where he ran product management for the Mac and Mac OS. At Apple, from the start, the people building the product would first envision what the product would eventually become. Loggly took the same approach with its service: it designed the user experience first and then did the engineering.

In the new version, Loggly has added a real-time search capability it calls Dynamic Field Explorer (DFE). It presents real-time, navigable summaries of logs so the customer can see the big things that are happening, along with anomalies and other issues.

Existing log management services let users type a search, but it is a trial-and-error process. DFE instead provides structure created in real time: it infers the structure of the logs. For example, it will process the log types, the fields, the values for each field and counts of how often those values occur.
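The underlying idea can be illustrated with a short sketch. The code below is a minimal, hypothetical example, not Loggly’s implementation: given a handful of already-parsed log events, it tallies the values seen for each field to produce the kind of navigable, per-field summary DFE presents.

```python
from collections import Counter, defaultdict

# Hypothetical parsed log events; in practice these would come from a
# parsing pipeline running over raw log lines.
events = [
    {"status": "500", "method": "GET",  "path": "/checkout"},
    {"status": "200", "method": "GET",  "path": "/home"},
    {"status": "500", "method": "POST", "path": "/checkout"},
]

# Build a per-field catalog: every value seen and how often it occurs.
catalog = defaultdict(Counter)
for event in events:
    for field, value in event.items():
        catalog[field][value] += 1

for field, counts in catalog.items():
    print(field, dict(counts))
# status {'500': 2, '200': 1}
# method {'GET': 2, 'POST': 1}
# path {'/checkout': 2, '/home': 1}
```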

These summaries give the customer a bird’s-eye view of the logs that they can then narrow down to solve a problem. DFE provides a dynamic catalog of the errors so the customer knows what is happening in the logs, and can get to problems in seconds without having to discover the issues through trial and error.

With DFE, the Loggly technology parses the data, then indexes it. At that point, it has all the structured information needed to build real-time dynamic catalogs.

Loggly’s heavy lifting is parsing and indexing a mass of text and numbers that the machines in the infrastructure generate. Loggly then turns that mass of data into a structure that the customer can use.

Loggly uses a technique called “regular expressions” to parse the data. Regular expressions are a way of looking for patterns in strings and extracting designated pieces; they can help find things such as which strings begin with a letter and end with a semicolon.
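As a simple illustration of the technique (the patterns here are made up for the example, not Loggly’s actual ones), the snippet below uses one regular expression to find strings that begin with a letter and end with a semicolon, and a second to extract a designated piece of a log line.

```python
import re

lines = ["user=alice;", "42 is not a match", "Error: disk full;"]

# Match strings that begin with a letter and end with a semicolon.
starts_letter_ends_semi = re.compile(r"^[A-Za-z].*;$")
print([s for s in lines if starts_letter_ends_semi.match(s)])
# ['user=alice;', 'Error: disk full;']

# Extract a designated piece: the value of a hypothetical "user=" field.
user_pattern = re.compile(r"user=(\w+)")
match = user_pattern.search("2014-06-10 user=alice action=login")
if match:
    print(match.group(1))  # alice
```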

It then classifies the data. For example, if it is a standard log type like an Apache log, Loggly will look for the signature in the data that identifies it as Apache. Another approach is to look for JSON to determine the structure of the data. When customers give Loggly JSON data, they are declaring the structure up front, which lets Loggly parse the information unambiguously.
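A rough sketch of that kind of classification, assuming a simplified Apache access-log signature rather than Loggly’s actual detection logic, might look like this: try the unambiguous format first, then fall back to signature matching.

```python
import json
import re

# Simplified signature for an Apache common-log-format line (an assumption
# for illustration; real detection would check far more than this).
APACHE_RE = re.compile(r'^\S+ \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+|-)')

def classify(line: str) -> str:
    try:
        json.loads(line)          # JSON declares its structure up front
        return "json"
    except ValueError:
        pass
    if APACHE_RE.match(line):     # otherwise look for a known log signature
        return "apache"
    return "unknown"

print(classify('{"level": "error", "msg": "disk full"}'))  # json
print(classify('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
               '"GET / HTTP/1.0" 200 2326'))               # apache
```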

Parsing log data is not unique to Loggly. Splunk, though, starts with queries, while Loggly maintains it gives the customer context first so they do not have to go searching for what they need.

Loggly uses Elasticsearch, which does the core indexing, then adds its own product features on top.
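To show what that indexing step looks like in practice, here is a minimal sketch using the official Elasticsearch Python client (8.x-style API); the index name, document fields and local URL are assumptions for the example, not Loggly’s internals.

```python
from elasticsearch import Elasticsearch

# Connect to a local Elasticsearch node (URL is an assumption for this sketch).
es = Elasticsearch("http://localhost:9200")

# A log event after parsing: the structured fields make it searchable.
parsed_event = {
    "timestamp": "2014-06-10T13:55:36Z",
    "status": 500,
    "method": "GET",
    "path": "/checkout",
}

# Index the document; Elasticsearch handles the inverted-index heavy lifting.
es.index(index="logs", document=parsed_event)

# A structured query against an indexed field.
results = es.search(index="logs", query={"term": {"status": 500}})
print(results["hits"]["total"])
```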

Loggly is in the business of solving operations issues that are magnified by the use of highly complex distributed services such as AWS. The what, when and how of a problem get magnified in a distributed world. Loggly’s goal is to make that management a little easier and less time-consuming.

The approach Loggly takes has been a success. It has more than 5,000 customers and, according to the company, is seeing a steady stream of customers who pay more than $100,000 per year in subscription fees. Funding now totals $33.4 million.