Dr. Mike Flaxman, VP of Product Management at HEAVY.AI – Interview Series


Dr. Mike Flaxman is currently the VP of Product at HEAVY.AI, having previously served as Product Manager and led the Spatial Data Science practice in Professional Services. He has spent the last 20 years working in spatial environmental planning. Prior to HEAVY.AI, he founded Geodesign Technologies, Inc. and cofounded GeoAdaptive LLC, two startups applying spatial analysis technologies to planning. Before startup life, he was a professor of planning at MIT and Industry Manager at ESRI.

HEAVY.AI is a hardware-accelerated platform for real-time, high-impact data analytics. It leverages both GPU and CPU processing to query massive datasets quickly, with support for SQL and geospatial data. The platform includes visual analytics tools for interactive dashboards, cross-filtering, and scalable data visualizations, enabling efficient big data analysis across various industries.

Can you tell us about your professional background and what led you to join HEAVY.AI?

Before joining HEAVY.AI, I spent years in academia, ultimately teaching spatial analytics at MIT. I also ran a small consulting firm with a variety of public sector clients, and I've been involved in GIS projects across 17 countries. My work has taken me from advising organizations like the Inter-American Development Bank to managing GIS technology for architecture, engineering and construction at ESRI, the world's largest GIS developer.

I vividly remember my first encounter with what is now HEAVY.AI, when as a consultant I was responsible for scenario planning for the Florida Beaches Habitat Conservation Program. My colleagues and I were struggling to model sea turtle habitat using 30m Landsat data, and a friend pointed me to some brand-new and very relevant data – 5cm LiDAR. It was exactly what we needed scientifically, but something like 3,600 times larger than what we'd planned to use. Needless to say, no one was going to increase my budget by even a fraction of that amount. So that day I put down the tools I'd been using and teaching for several decades and went looking for something new. HEAVY.AI sliced through and rendered that data so smoothly and effortlessly that I was instantly hooked.

Fast forward a few years, and I still think what HEAVY.AI does is fairly unique, and its early bet on GPU analytics was exactly where the industry still needs to go. HEAVY.AI is firmly focused on democratizing access to big data. This has the data volume and processing speed component, of course – essentially giving everyone their own supercomputer. But an increasingly important aspect, with the advent of large language models, is making spatial modeling accessible to many more people. These days, rather than spending years learning a complex interface with thousands of tools, you can simply start a conversation with HEAVY.AI in the human language of your choice. The system not only generates the commands required, but also presents relevant visualizations.

Behind the scenes, delivering ease of use is of course very difficult. Currently, as the VP of Product Management at HEAVY.AI, I'm heavily involved in determining which features and capabilities we prioritize for our products. My extensive background in GIS allows me to really understand the needs of our customers and guide our development roadmap accordingly.

How has your previous experience in spatial environmental planning and startups influenced your work at HEAVY.AI?

Environmental planning is a particularly challenging domain in that you need to account for both differing sets of human needs and the natural world. The general solution I found early on was to pair a method known as participatory planning with the technologies of remote sensing and GIS. Before deciding on a course of action, we'd build multiple scenarios and simulate their positive and negative impacts in the computer using visualizations. Using participatory processes let us combine various forms of expertise and solve very complex problems.

While we don't typically do environmental planning at HEAVY.AI, this pattern still works very well in business settings. So we help customers construct digital twins of key parts of their business, and we let them create and evaluate business scenarios quickly.

I suppose my teaching experience has given me deep empathy for software users, particularly users of complex software systems. Where one student stumbles in one spot is random, but where dozens or hundreds of people make similar errors, you know you've got a design issue. Perhaps my favorite part of software design is taking those learnings and applying them in designing new generations of systems.

Can you explain how HeavyIQ leverages natural language processing to facilitate data exploration and visualization?

These days it seems everyone and their brother is touting a new genAI model, most of them forgettable clones of one another. We've taken a very different path. We believe that accuracy, reproducibility and privacy are essential characteristics for any enterprise analytics tools, including those generated with large language models (LLMs). So we have built these into our offering at a fundamental level. For example, we constrain model inputs strictly to enterprise databases and to provided documents within an enterprise security perimeter. We also constrain outputs to the latest HeavySQL and Charts. That means that whatever question you ask, we will try to answer it with your data, and we will show you exactly how we derived that answer.

With these guarantees in place, it matters less to our customers exactly how we process the queries. But behind the scenes, another important distinction relative to consumer genAI is that we fine-tune models extensively against the specific kinds of questions business users ask of enterprise data, including spatial data. So, for example, our model is excellent at performing spatial and time series joins, which aren't in classical SQL benchmarks but which our users need daily.
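
To make the idea of a time series join concrete, here is a minimal pure-Python sketch of an ASOF-style temporal join: for each event, attach the most recent sensor reading at or before the event's timestamp. All names and data below are invented for illustration; in HEAVY.AI this would be a single SQL query executed on the GPU.

```python
from bisect import bisect_right

def asof_join(events, readings):
    """ASOF-style temporal join: for each (timestamp, label) event, attach
    the value of the most recent reading at or before that timestamp.
    `readings` is a list of (timestamp, value) tuples sorted by timestamp."""
    times = [t for t, _ in readings]
    joined = []
    for t, label in events:
        i = bisect_right(times, t) - 1  # index of the last reading with time <= t
        joined.append((t, label, readings[i][1] if i >= 0 else None))
    return joined

readings = [(1, 20.0), (5, 22.5), (9, 25.0)]      # e.g. temperature samples
events = [(4, "call_drop"), (10, "call_drop")]    # e.g. network events
print(asof_join(events, readings))
# [(4, 'call_drop', 20.0), (10, 'call_drop', 25.0)]
```

This kind of "nearest earlier match" logic is exactly what is awkward to express in classical SQL but common in monitoring and telemetry workloads.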

We package these core capabilities into a Notebook interface we call HeavyIQ. IQ is about making data exploration and visualization as intuitive as possible by using natural language processing (NLP). You ask a question in English—like, "What were the weather patterns in California last week?"—and HeavyIQ translates that into SQL queries that our GPU-accelerated database processes quickly. The results are presented not just as data but as visualizations—maps, charts, whatever's most relevant. It's about enabling fast, interactive querying, especially when dealing with large or fast-moving datasets. What's key here is that it's often not the first question you ask, but perhaps the third, that really gets to the core insight, and HeavyIQ is designed to facilitate that deeper exploration.
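
The question-to-SQL flow, including the constraint that generated SQL may only touch known schema, can be sketched in miniature. Everything here is a toy: `generate_sql` is a regex stand-in for the fine-tuned model, the weather schema and data are invented, and SQLite stands in for the GPU database. The point is the shape of the pipeline, not HEAVY.AI's implementation.

```python
import re
import sqlite3

ALLOWED = {"select", "avg", "from", "group", "by",      # permitted SQL keywords
           "weather", "station", "day", "temp_c"}       # hypothetical schema

def generate_sql(question: str) -> str:
    """Stand-in for the text-to-SQL model: handles one question shape."""
    m = re.search(r"average (\w+) by (\w+)", question)
    if not m:
        raise ValueError("unsupported question")
    metric, group = m.groups()
    return f"SELECT {group}, AVG({metric}) FROM weather GROUP BY {group}"

def validate(sql: str) -> str:
    """Guardrail: reject SQL mentioning anything outside the known schema."""
    unknown = set(re.findall(r"[a-z_]+", sql.lower())) - ALLOWED
    if unknown:
        raise ValueError(f"disallowed identifiers: {unknown}")
    return sql

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE weather (station TEXT, day TEXT, temp_c REAL)")
db.executemany("INSERT INTO weather VALUES (?, ?, ?)",
               [("KSFO", "mon", 14.0), ("KSFO", "tue", 16.0), ("KLAX", "mon", 20.0)])

sql = validate(generate_sql("average temp_c by station"))
print(sorted(db.execute(sql).fetchall()))  # [('KLAX', 20.0), ('KSFO', 15.0)]
```

A real deployment would also log the generated SQL alongside the answer, which is what makes the "we show you exactly how we derived that answer" guarantee auditable.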

What are the primary benefits of using HeavyIQ over traditional BI tools for telcos, utilities, and government agencies?

HeavyIQ excels in environments where you're dealing with large-scale, high-velocity data—exactly the kind of data telcos, utilities, and government agencies handle. Traditional business intelligence tools often struggle with the volume and velocity of this data. For instance, in telecommunications, you might have billions of call records, but it's the tiny fraction of dropped calls that you need to focus on. HeavyIQ lets you sift through that data 10 to 100 times faster thanks to our GPU infrastructure. This speed, combined with the ability to interactively query and visualize data, makes it invaluable for risk analytics in utilities or real-time scenario planning for government agencies.

The other advantage, already alluded to above, is that spatial and temporal SQL queries are extremely powerful analytically – but can be slow or difficult to write by hand. When a system operates at what we call "the speed of curiosity," users can ask both more questions and more nuanced questions. So, for example, a telco engineer might spot a temporal spike in equipment failures from a monitoring system, have the intuition that something is going wrong at a particular facility, and investigate this with a spatial query returning a map.
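
That two-step investigation (find the temporal spike, then drill in spatially) can be mimicked in a few lines of plain Python. The failure records and facility coordinates below are invented sample data, not HEAVY.AI code; in practice each step would be a single interactive SQL query with a map render.

```python
from collections import Counter
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

# (hour, lat, lon) equipment-failure records from a hypothetical monitoring feed
failures = [(9, 37.77, -122.42), (9, 37.78, -122.41), (9, 40.71, -74.01),
            (10, 37.77, -122.42), (11, 40.71, -74.01)]

# Step 1 (temporal): which hour shows a spike in failures?
spike_hour = Counter(h for h, _, _ in failures).most_common(1)[0][0]

# Step 2 (spatial): failures in that hour within 5 km of a suspect facility
facility = (37.7749, -122.4194)  # hypothetical San Francisco site
nearby = [f for f in failures
          if f[0] == spike_hour and haversine_km(f[1], f[2], *facility) < 5]
print(spike_hour, len(nearby))  # hour 9, with 2 failures near the facility
```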

What measures are in place to prevent metadata leakage when using HeavyIQ?

As described above, we've built HeavyIQ with privacy and security at its core. This includes not only data but also several kinds of metadata. We use column- and table-level metadata extensively in determining which tables and columns contain the information needed to answer a query. We also use internal company documents, where provided, to support what is known as retrieval-augmented generation (RAG). Finally, the language models themselves generate further metadata. All of these, but especially the latter two, can be of high business sensitivity.
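
For readers unfamiliar with RAG, the retrieval step can be sketched with a toy example: score candidate documents against the question and hand only the best matches to the model. The scoring here is simple word overlap and the documents are invented; production systems use embedding similarity, but the flow is the same.

```python
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query and return the top k.
    The retrieved text is then placed in the LLM prompt as grounding context."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

docs = ["outage playbook for substation equipment",
        "holiday schedule for the finance team",
        "sensor calibration notes for substation monitors"]
print(retrieve("substation equipment outage", docs))
# ['outage playbook for substation equipment']
```

Because the retrieved documents end up inside prompts, they carry exactly the sensitivity the answer describes, which is why keeping retrieval inside the security perimeter matters.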

Unlike third-party models, where your data is typically sent off to external servers, HeavyIQ runs locally on the same GPU infrastructure as the rest of our platform. This ensures that your data and metadata remain under your control, with no risk of leakage. For organizations that require the highest levels of security, HeavyIQ can even be deployed in a completely air-gapped environment, ensuring that sensitive information never leaves specific equipment.

How does HEAVY.AI achieve high performance and scalability with massive datasets using GPU infrastructure?

The secret sauce is essentially in avoiding the data movement prevalent in other systems. At its core, this starts with a purpose-built database that's designed from the ground up to run on NVIDIA GPUs. We've been working on this for over 10 years now, and we truly believe we have the best-in-class solution when it comes to GPU-accelerated analytics.

Even the best CPU-based systems run out of steam well before a middling GPU. The strategy once this happens on CPU requires distributing data across multiple cores and then multiple systems (so-called 'horizontal scaling'). This works well in some contexts where things are less time-critical, but often starts getting bottlenecked on network performance.

In addition to avoiding all of this data movement on queries, we also avoid it on many other common tasks. The first is that we can render graphics without moving the data. Then if you want ML inference modeling, we again do that without data movement. And if you interrogate the data with a large language model, we yet again do that without data movement. Even if you're a data scientist and want to interrogate the data from Python, we again provide methods to do that on GPU without data movement.
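
The cost of data movement is easy to feel even in plain Python: slicing a `bytes` object materializes a copy, while a `memoryview` exposes the same memory without copying. This is only an analogy for the zero-copy access the platform describes, not its mechanism.

```python
# ~1 MB buffer standing in for a database column
data = bytearray(range(256)) * 4096

copy_slice = bytes(data)[100:200]       # materializes a new, independent copy
view_slice = memoryview(data)[100:200]  # zero-copy window onto the same buffer

data[100] = 255                         # mutate the underlying buffer
# The view reflects the change; the copy froze the old value when it was made.
print(copy_slice[0], view_slice[0])     # 100 255
```

The same principle, applied across query, render, inference, and Python access paths on GPU memory, is what the answer above is describing.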

What that means in practice is that we can perform not only queries but also rendering 10 to 100 times faster than traditional CPU-based databases and map servers. When you're dealing with the massive, high-velocity datasets that our customers work with – things like weather models, telecom call records, or satellite imagery – that kind of performance boost is absolutely essential.

How does HEAVY.AI maintain its competitive edge in the fast-evolving landscape of big data analytics and AI?

That's a great question, and it's something we think about constantly. The landscape of big data analytics and AI is evolving at an incredibly rapid pace, with new breakthroughs and innovations happening all the time. It certainly doesn't hurt that we have a ten-year head start on GPU database technology.

I think the key for us is to stay laser-focused on our core mission – democratizing access to big, geospatial data. That means continually pushing the boundaries of what's possible with GPU-accelerated analytics, and ensuring our products deliver unparalleled performance and capabilities in this space. A big part of that is our ongoing investment in developing custom, fine-tuned language models that truly understand the nuances of spatial SQL and geospatial analysis.

We've built up an extensive library of training data, going well beyond generic benchmarks, to ensure our conversational analytics tools can engage with users in a natural, intuitive way. But we also know that technology alone isn't enough. We have to stay deeply connected to our customers and their evolving needs. At the end of the day, our competitive edge comes down to our relentless focus on delivering transformative value to our users. We're not just keeping pace with the market – we're pushing the boundaries of what's possible with big data and AI. And we'll continue to do so, no matter how quickly the landscape evolves.

How does HEAVY.AI support emergency response efforts through HeavyEco?

We built HeavyEco when we saw some of our largest utility customers facing significant challenges simply ingesting today's weather model outputs, as well as visualizing them for joint comparisons. It was taking one customer up to four hours just to load data, and when you're up against fast-moving extreme weather conditions like fires…that's just not good enough.

HeavyEco is designed to provide real-time insights in high-consequence situations, like during a wildfire or flood. In such scenarios, you need to make decisions quickly and based on the best possible data. So HeavyEco serves first and foremost as a professionally managed data pipeline for authoritative models such as those from NOAA and USGS. On top of those, HeavyEco lets you run scenarios, model building-level impacts, and visualize data in real time. This gives first responders the critical information they need when it matters most. It's about turning complex, large-scale datasets into actionable intelligence that can guide immediate decision-making.

Ultimately, our goal is to give our users the ability to explore their data at the speed of thought. Whether they're running complex spatial models, comparing weather forecasts, or trying to detect patterns in geospatial time series, we want them to be able to do it seamlessly, without any technical barriers getting in their way.

What distinguishes HEAVY.AI's proprietary LLM from other third-party LLMs in terms of accuracy and performance?

Our proprietary LLM is specifically tuned for the kinds of analytics we focus on—like text-to-SQL and text-to-visualization. We initially tried traditional third-party models, but found they didn't meet the high accuracy requirements of our users, who are often making critical decisions. So we fine-tuned a range of open-source models and tested them against industry benchmarks.

Our LLM is much more accurate for the advanced SQL concepts our users need, particularly for geospatial and temporal data. Additionally, because it runs on our GPU infrastructure, it's also more secure.

In addition to the built-in model capabilities, we also provide a full interactive user interface for administrators and users to add domain- or business-relevant metadata. For example, if the base model doesn't perform as expected, you can import or tweak column-level metadata, or add guidance information and immediately get feedback.

How does HEAVY.AI envision the role of geospatial and temporal data analytics in shaping the future of various industries?

We believe geospatial and temporal data analytics are going to be essential to the future of many industries. What we're really focused on is helping our customers make better decisions, faster. Whether you're in telecom, utilities, government, or elsewhere, having the ability to analyze and visualize data in real time can be a game-changer.

Our mission is to make this kind of powerful analytics accessible to everyone, not just the big players with massive resources. We want to make sure our customers can take advantage of the data they have, to stay ahead and solve problems as they arise. As data continues to grow and become more complex, we see our role as making sure our tools evolve right alongside it, so our customers are always prepared for what's next.

Thank you for the great interview; readers who wish to learn more should visit HEAVY.AI.
