AI seems to be everywhere now, and while I am still waiting for it to do my taxes, AI does have considerable promise to help people learn more about the world around them.  

In earlier blog posts, I described how Hurricane Helene impacted the water quality where I live in North Carolina. I imported Asheville water quality data into Locus software so I could explore it with the Locus GIS+ mapping application. My goal was to understand the quality of the drinking water where I live. GIS+ lets you answer these questions quickly, but the interface requires you to select items from various filters, run a query, and compare the results to your location on the map. Wouldn’t it be simpler to type a question like “Was E. coli bacteria found in the drinking water near my home?” into the GIS+ app and get a “Yes” or “No” response? This type of task seems like a great application for AI. 

Locus Technologies continues to explore new frontiers such as a “natural language” AI chatbot for data stored in Locus software. While this task seems simple enough, there are many considerations and challenges for AI developers to address. 

      • Technical: The first challenge is creating a chatbot to answer questions. This challenge is straightforward because there are many existing AI models that can be pulled “off-the-shelf” and integrated into Locus software. Work is needed to convert a question into a database query that returns an answer, but standard tools exist for this task. Care must be taken, though, to ensure the answer provided is correct. AI can sometimes ‘hallucinate’, or provide incorrect information, and such behavior must be avoided. 
      • Scoping: A much harder challenge is parsing the question so that the AI considers the appropriate scope of the question. In many cases, what is unsaid is as important as what is said. Consider the question “Was lead found in the drinking water near my home?” The AI must read between the lines to determine the context and scope. 
        • Temporal: The verb “was” refers to the past; does it mean the last day, week, month, or year?  
        • Geographic: What is “near”? Is it one mile, ten miles, or fifty miles? And where is “my home”? If the user has location services enabled on their device, the AI can easily determine this, but what if the user is asking the question while not at home?  
        • Data: What data should be considered when running the query? The verb “found” in the question can mean many things. 
          • Are we asking for lead that was detected above a level of concern, or is any result allowed? Similarly, should the query include results that were later rejected due to quality assurance checks? 
          • Should the query include all results near the user’s home, or just the ones that are most relevant to the user? For example, after Helene, some locations had elevated levels of lead in the water because pipes had sat unused for weeks. Once the pipes were flushed, the lead levels dropped significantly. Should all these results be included, or just the post-flush results? As another example, locations in Asheville are served by one of three water sources; only results from the source that serves the user’s location should be used. 
      • Transparency: The challenge here is to ensure the user knows exactly what they are getting. As implied above, the AI chatbot must make assumptions about what the user really seeks and then run the query based on its decisions. These assumptions must be presented to the user along with the data.  
        • For example, one possible answer to the question “Was any lead found in the water near my home?” could be “Yes, lead was found at 40 ppb in a sample taken 20 miles from your location on Nov 1, 2025, but the sample was collected before the pipes were flushed. The lead level was only 0.01 ppb at the same location after the flush.” This answer is much better than “Yes”. 
        • Perhaps a better option is to have the AI take each question as a starting point and ask additional questions to clarify what is truly being asked. The user experience thus becomes a conversation between the user and the AI that leads to the desired results. 
      • Ethics: This final challenge is a catch-all category for several related concerns.  
        • First, some data may be flagged as private, and care must be taken to ensure these results are not shown to unauthorized users.  
        • Second, all results should be sourced, so the user knows where the data came from.  
        • Third, because AI is not perfect, chatbot answers must not be presented as authoritative; users should be directed to official sources for more information.  
        • Finally, because the information is not authoritative, the results must not be easily shared without proper context. During the Helene response, the efforts of authorities to restore clean drinking water could have been compromised if citizens caused panic by sharing incorrect or incomplete information.
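To make the scoping and transparency ideas concrete, here is a minimal sketch of how a chatbot layer might turn a question into a parameterized query while recording every assumption it makes so those assumptions can be shown to the user. All names here (the defaults, the table and column names, the helper functions) are hypothetical illustrations, not Locus's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical defaults the chatbot falls back on when the question
# leaves scope unstated; each fallback is recorded as an assumption.
DEFAULT_DAYS_BACK = 30
DEFAULT_RADIUS_MILES = 10.0

@dataclass
class QueryScope:
    analyte: str
    days_back: int = DEFAULT_DAYS_BACK
    radius_miles: float = DEFAULT_RADIUS_MILES
    detected_only: bool = True       # ignore non-detect results
    exclude_rejected: bool = True    # drop QA-rejected results
    assumptions: list = field(default_factory=list)

def scope_question(question: str) -> QueryScope:
    """Very rough keyword-based scoping of a water-quality question."""
    q = question.lower()
    analyte = "lead" if "lead" in q else "unknown"
    scope = QueryScope(analyte=analyte)
    # Temporal: "was" implies the past, but no window is stated.
    scope.assumptions.append(f"Time window: last {scope.days_back} days")
    # Geographic: "near my home" names no distance.
    scope.assumptions.append(f"Search radius: {scope.radius_miles} miles")
    # Data: treat "found" as any detection, but skip rejected results.
    scope.assumptions.append("Including all detections; excluding QA-rejected results")
    return scope

def to_sql(scope: QueryScope) -> str:
    """Build a query string with named placeholders (never raw user text)."""
    clauses = [
        "analyte = %(analyte)s",
        "sample_date >= NOW() - INTERVAL %(days_back)s DAY",
        "distance_miles(location, %(home)s) <= %(radius)s",
    ]
    if scope.detected_only:
        clauses.append("result > detection_limit")
    if scope.exclude_rejected:
        clauses.append("qa_flag <> 'rejected'")
    return "SELECT * FROM results WHERE " + " AND ".join(clauses)

scope = scope_question("Was lead found in the drinking water near my home?")
for a in scope.assumptions:
    print("-", a)           # assumptions surfaced alongside the answer
print(to_sql(scope))
```

In a real deployment, the assumptions list would be presented with the answer, or better, turned into the follow-up questions described above, so the conversation converges on what the user actually meant.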

The list above may be daunting! Even so, Locus is committed to finding ways to apply AI so Locus clients can better use their data to protect the environment at their facilities. Contact Locus for more information about storing your environmental data in the cloud so it can be leveraged by the new generation of AI-powered Locus applications! 

Locus is the only self-funded water, air, soil, biological, energy, and waste EHS software company that is still owned and managed by its founder. The brightest minds in environmental science, embodied carbon, CO2 emissions, refrigerants, and PFAS hang their hats at Locus, and they’ve helped us to become a market leader in EHS software. Every client-facing employee at Locus has an advanced degree in science or professional EHS experience, and they incubate new ideas every day – such as how machine learning, AI, blockchain, and the Internet of Things will up the ante for EHS software, ESG, and sustainability.

Interested? Subscribe to our expert newsletter.