Thursday, 27 October 2022

Usage of AI in Water Consumption

Water vs Oil

Researchers predict that water will be scarcer than oil by 2030 and that demand will exceed supply by 40%. Less than 1% of all water on Earth is fit for human consumption. The question for us is how we can continuously monitor water everywhere without allocating a lot of manpower to it.

We live in a data-driven economy and an AI-driven world where everything is powered by data and artificial intelligence. Deep learning is a paradigm within AI that specifically focuses on building models inspired by the human brain to achieve efficient operations. Artificial intelligence measures data, analyzes it, learns, and deduces.

What can we achieve by using AI to monitor water consumption?


1. We can efficiently collect information about usage, time of usage, and so on.

For example, more water is used in the mornings and in the evenings, which a simple aggregation by hour can reveal (see the short sketch below).
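As a tiny illustration with made-up smart-meter readings (not real data), consumption can be grouped by hour of day to make those morning and evening peaks visible:

```python
import pandas as pd

# Hypothetical smart-meter readings: one row per reading (timestamp, liters used).
readings = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2022-10-27 07:15", "2022-10-27 07:45", "2022-10-27 13:10",
        "2022-10-27 19:30", "2022-10-27 20:05", "2022-10-28 07:20",
    ]),
    "liters": [38, 25, 10, 42, 30, 36],
})

# Average consumption per hour of day shows when water is used most.
hourly_profile = (
    readings.assign(hour=readings["timestamp"].dt.hour)
            .groupby("hour")["liters"]
            .mean()
)
print(hourly_profile)  # peaks at 7h and around 19-20h in this toy data
```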


2. Artificial intelligence helps extract meaningful insights from the data and guides changes in behavior for managing water effectively.

AI helps to:

Track trends and reduce the risk of water shortages
Identify anomalies (illustrated in the sketch after this list)
Forecast demand and predict events
Avoid water loss
Maintain a stable water supply
Manage sewers
Reduce water usage
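To make the anomaly point concrete, here is a minimal sketch using made-up daily readings for a single household (real utility systems are far more elaborate); it flags days whose consumption is far above the typical level:

```python
import numpy as np

# Hypothetical daily consumption for one household, in liters.
daily_liters = np.array([310, 295, 330, 305, 315, 900, 320, 310])

# Median and median absolute deviation give a baseline that the spike itself cannot skew.
median = np.median(daily_liters)
mad = np.median(np.abs(daily_liters - median))

# Flag days far above the typical level (possible leak, burst pipe, or meter fault).
threshold = median + 10 * mad
anomalous_days = np.where(daily_liters > threshold)[0]
print(anomalous_days)  # -> [5], the 900-liter day
```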


3. Water waste

Liters of water are wasted every day because of leaks, old pipes, and other problems. Through analysis and machine learning, we can predict the best time to replace a pipe or when to expect a leak in a given area.
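A very simplified sketch of what such a prediction could look like, using hypothetical pipe attributes and leak records (the feature names and numbers here are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [pipe age in years, diameter in mm, average pressure in bar]
# and whether that segment leaked within the following year (1) or not (0).
X = np.array([
    [45, 100, 6.0],
    [60, 150, 7.5],
    [10, 100, 4.0],
    [5,  200, 3.5],
    [70, 100, 8.0],
    [30, 150, 5.0],
    [55, 100, 7.0],
    [8,  200, 4.5],
], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rank candidate segments by predicted leak probability to prioritise replacement.
candidates = np.array([[50, 100, 6.5], [12, 200, 4.0]], dtype=float)
print(model.predict_proba(candidates)[:, 1])
```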


(Image: Fracta, a California startup that uses AI to detect leaks and make better management and maintenance decisions)


4. Water Management

Municipalities can combine sensor data with AI to improve how they manage the water supply. Data collected from houses and buildings can also help in designing water-saving programs.

5. Avoid expensive maintenance

Using sensors, pipes can be monitored for water flow and erosion, providing the information needed to schedule replacements ahead of time.





The Stand of AI on Fairness

In 2004 the movie I, Robot portrayed a civilization assisted by humanoid robots in which some people (no spoilers, but you should have seen it already) are sceptical of the reliability of these forms of artificial intelligence. As of 2022, neither the threats of AI nor the use we make of robots follow Hollywood scripts, yet some real potential issues must be addressed.

The impact of predictions and decisions assisted by AI/ML varies by field. Applications range from face recognition software to medical diagnosis, from marketing analytics to assistance for judges in court. It is precisely this last type of application that this brief enquiry focuses on: the COMPAS case as a tool to discuss broader implications of AI decision-based systems.


Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a software package for crime prediction. Its aim is to assess a defendant’s probability of reoffending, thus supporting judges in their decisions: whether a defendant should be detained before trial or might be released on bail, whether she should go to prison or might be eligible for probation, or whether she should stay in jail or might be a candidate for parole. The model uses the answers to 137 questions to compute a risk score.
It has been used since 2015 in several US courts. Even though it is by now an “old” tool, the use of AI instruments to aid human decision-making is growing, which makes it an excellent example for discussing some ethical matters.

More specifically, a simple confusion matrix revealed an alarming fact: white defendants were statistically more likely to be falsely classified by the algorithm as “not prone to recidivism”, while black defendants were more likely to be falsely classified as likely to be arrested again if released.
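To see what such a per-group confusion-matrix check looks like in practice, here is a small sketch with invented labels and predictions (not the actual COMPAS data); for each group it computes how often people who did not reoffend were still flagged as high risk, and vice versa:

```python
import numpy as np

def error_rates(y_true, y_pred):
    """False positive and false negative rates from binary labels (1 = reoffended)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fpr = fp / max(np.sum(y_true == 0), 1)  # flagged "high risk" but did not reoffend
    fnr = fn / max(np.sum(y_true == 1), 1)  # flagged "low risk" but did reoffend
    return fpr, fnr

# Hypothetical outcomes and risk labels for two groups (illustrative numbers only).
group_a_true, group_a_pred = [0, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 1]
group_b_true, group_b_pred = [0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1]

print("group A (FPR, FNR):", error_rates(group_a_true, group_a_pred))
print("group B (FPR, FNR):", error_rates(group_b_true, group_b_pred))
```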

This issue is about statistical fairness: a sort of algorithmic discrimination that makes the expected probability of recidivism differ for people of different ethnicities. From here two new questions arise:

  • Why did it happen?

  • How can we make it fair?

“Race” is not a variable of the model, so why do we see discrimination? In the training set there may be a significant correlation between race and the predicted variable, not because criminality rates are actually higher in the black population, but because of systemic racism which punishes, ceteris paribus, black people more harshly than white people. Hence, the dataset used for training the model is flawed, and the model’s predictions cannot help but reflect that.
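A toy sketch of this proxy effect, with entirely synthetic data: the model is never shown the sensitive attribute, only a feature strongly correlated with it, and its risk scores still end up differing between the two groups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic setup: "group" is the sensitive attribute the model never sees;
# "neighborhood" is a proxy feature strongly correlated with it; the historical
# labels encode a disparity between the groups, whatever its cause.
group = rng.integers(0, 2, n)
neighborhood = (group + (rng.random(n) < 0.1)) % 2
arrested = (rng.random(n) < 0.2 + 0.3 * group).astype(int)

# Train only on the proxy feature.
X = neighborhood.reshape(-1, 1).astype(float)
scores = LogisticRegression().fit(X, arrested).predict_proba(X)[:, 1]

# The predicted risk still differs sharply by group, even though "group" was dropped.
print("mean predicted risk, group 0:", scores[group == 0].mean())
print("mean predicted risk, group 1:", scores[group == 1].mean())
```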

Now that we have an answer to the first question, how can we fix the problem? Should we take a compensatory approach? Should we ask the model to take into account the difference in rates between groups? It all depends on how we intend to measure fairness. A thorough discussion suggests that the predicted rates should be made equal, or at least more equal than they are now. This must be achieved in parallel with affirmative action to counterbalance the effect of past wrongs, in the conscientious sense of not letting past disadvantages determine the future fates of people belonging to discriminated groups.
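One common mitigation (among several, and choosing between them is exactly the ethical question above) is to post-process the model's scores, for example by picking a separate decision threshold for each group so that the false positive rates are held at the same level. Here is a sketch with hypothetical scores and outcomes:

```python
import numpy as np

def fpr(scores, y_true, threshold):
    """False positive rate when flagging everyone at or above the threshold."""
    flagged = scores >= threshold
    negatives = (y_true == 0)
    return np.sum(flagged & negatives) / max(np.sum(negatives), 1)

def pick_threshold(scores, y_true, target_fpr):
    """Smallest threshold whose false positive rate does not exceed the target."""
    for t in np.sort(np.unique(scores)):
        if fpr(scores, y_true, t) <= target_fpr:
            return t
    return 1.0

# Hypothetical risk scores and outcomes for two groups.
rng = np.random.default_rng(1)
scores_a, y_a = rng.random(500), rng.integers(0, 2, 500)
scores_b, y_b = rng.random(500), rng.integers(0, 2, 500)

# Instead of one global cut-off, choose per-group thresholds that hold the
# false positive rate at the same level for both groups.
t_a = pick_threshold(scores_a, y_a, target_fpr=0.2)
t_b = pick_threshold(scores_b, y_b, target_fpr=0.2)
print("group A threshold and FPR:", t_a, fpr(scores_a, y_a, t_a))
print("group B threshold and FPR:", t_b, fpr(scores_b, y_b, t_b))
```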


Thanks to the article Two Kinds of Discrimination in AI-Based Penal Decision-Making by Dietmar Hübner, we learned that before worrying about AI taking control of civilization, we have to be extremely careful not to let it perpetuate the injustices our society enacted in the past.


Bibliography:

  Hübner, D. (2021) Two Kinds of Discrimination in AI-Based Penal Decision-Making. SIGKDD Explorations. [Online] 23 (1), 4–13.


Deep learning and improving air quality forecasts

 

Air pollution from burning fossil fuels affects human health, but predicting pollution levels at any given time and place is still difficult. Satellite observations and ground-based measurements each capture air pollution, but both come with limitations, as many scientists have noted.

To address this issue, a deep learning approach can be used to analyze the relationship between satellite and ground-level observations of nitrogen dioxide in a given area.

A deep learning algorithm works somewhat like a human brain: it has many layers of neurons designed to process data and build models. The system learns by training on the connections it finds in large amounts of data, the scientists said. The scientists tested two deep learning algorithms and found that, when compared against ground observations, the one that incorporated satellite surveys predicted nitrogen dioxide levels more accurately. Adding information such as weather data, elevation, and the locations of bus stations, major roads, and fire stations further improved forecast accuracy.

“The challenge here is whether we can find a linkage between measurements from earth’s surface and satellite observations of the troposphere, which are actually far away from each other. That’s where deep learning comes in,” said a scientist who worked on this study.
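As a purely illustrative sketch of this kind of model (not the researchers' actual architecture or data), a small multilayer network can be trained to map a satellite NO2 column plus auxiliary features such as weather and road proximity onto surface-level NO2 measured at ground stations; everything below is synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 500

# Synthetic inputs standing in for the features the article mentions:
# satellite NO2 column, temperature, wind speed, elevation, distance to a major road.
X = rng.random((n, 5))

# Synthetic "ground station" NO2 target: a nonlinear mix of the inputs plus noise.
y = 40 * X[:, 0] - 5 * X[:, 1] * X[:, 2] + 10 / (1 + X[:, 4]) + rng.normal(0, 1, n)

# Small multilayer network: stacked layers of neurons trained on the paired data.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])
print("R^2 on held-out samples:", model.score(X[400:], y[400:]))
```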



Sources:

https://www.psu.edu/news/research/story/scientists-turn-deep-learning-improve-air-quality-forecasts/

https://www.freepik.com/free-vector/weather-station-illustration_30038886.htm#query=weather%20station&position=1&from_view=keyword


Disease Symptom Prediction

Introduction: Machine learning is programming computers to optimize a performance criterion using example data or past data. The development and e...