Introduction

Machine Learning (ML) equips organizations to make data-driven, better-informed decisions that are faster and leaner than those produced by traditional approaches (L’Heureux et al., 2017). Machine learning is the technology that enables systems to learn directly from data, examples and experience. Artificial Intelligence (AI) extends this concept: it functions more efficiently and intelligently by not only learning from past experience but also learning and responding in real time, adjusting to new data as it arrives (Mitchell, Michalski & Carbonell, 2013). Explainable AI is the next step, in which the focus shifts to determining how decisions or predictions are arrived at (Mueller, 2016). This report critically discusses the challenges of machine learning and how Explainable AI can overcome them, and also considers the future potential of AI and its impact on machine learning.

Machine Learning and its Challenges

In machine learning, a large amount of data is used as examples of how a particular task can be achieved or from which patterns can be detected (Mueller, 2016). The system learns from the available data and patterns how best to complete a task and achieve the desired output. The amount and the sources of data have increased significantly in recent years (The Royal Society, 2017), and existing approaches face various challenges in handling this larger volume of diverse data and information. A common presumption of ML is that algorithms learn better with more data and consequently produce more accurate results (Mueller, 2016). However, larger datasets impose challenges of their own, because traditional algorithms were not designed to meet the increased requirements. The main technical challenges in ML include:

  • Robustness and Verification
  • Availability of Data
  • Real Time Streaming/Processing of Data

(The Royal Society, 2017; L’Heureux et al., 2017).

In many applications, the quality of the predictions or decisions made by ML systems has to be verifiable. Not all ML systems are effective at selecting the right data, as they face certain limitations (The Royal Society, 2017). When ML systems are used in real-world applications, they may be exposed to data outside the range covered by the test data, so results cannot be generalized once online machine learning systems are deployed (L’Heureux et al., 2017). After implementation, the learning algorithms continue to be applied to the trained model, which adapts in response to the environment it interacts with (Mitchell et al., 2013). Because the behaviour and data to which the system is exposed keep changing, complex interaction patterns can arise, making it difficult to guarantee system performance (The Royal Society, 2017).

Many ML approaches also depend on the availability of data: the entire dataset is assumed to be present before learning begins (L’Heureux et al., 2017). In situations where new data streams in continuously, it is challenging for the system to learn from the dataset (Mueller, 2016). The ML system must then be retrained regularly so that its output reflects current data, which in turn requires support for incremental learning, itself challenging because the type of data may vary (Samek, Wiegand, & Muller, 2017); a sketch of this idea follows below.

Most ML systems are not designed to handle constant data streams carrying different types of data, which poses a challenge because the machine has to process fast-arriving, real-time data (Mueller, 2016). This is demanding for ML developers, since the algorithms must also handle fraud detection and surveillance to ensure real-time data is processed without violation (L’Heureux et al., 2017). This increases the complexity of the algorithms, and the limited availability of online learning tools for ML makes it difficult to ingest real-time data and process it with high accuracy.
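The retraining burden described above is why streaming settings favour algorithms that support incremental updates. The following is a minimal sketch, not taken from the cited sources, using scikit-learn's SGDClassifier, whose partial_fit method updates a model one mini-batch at a time so the full dataset never needs to be available up front; the stream itself is simulated with synthetic data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# loss="log_loss" gives logistic regression trained by SGD
# (named loss="log" in scikit-learn releases before 1.1).
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # every class must be declared on the first call

for step in range(100):
    # Stand-in for a mini-batch arriving from a live stream (synthetic data;
    # a real deployment would read from a queue or socket instead).
    X_batch = rng.normal(size=(32, 5))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    # partial_fit updates the existing weights instead of retraining from
    # scratch, which is exactly what incremental learning requires.
    model.partial_fit(X_batch, y_batch, classes=classes)

print("coefficients after streaming updates:", model.coef_)
```

Because each batch only nudges the existing weights, the model keeps adapting as the data distribution drifts, which is precisely the behaviour that makes performance guarantees hard to give.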

Explainable Artificial Intelligence (AI)

ML and AI make decisions based on the information available to them, without complete verification or a deep understanding of the decision-making rationale (Mueller, 2016). Explainable AI refers to AI techniques whose outputs can be trusted and understood by humans, in contrast to the "black box" of machine learning, where even the designers cannot explain how a specific decision was arrived at. Explainable AI is essential because explaining the rationale behind a decision to other people is an important aspect of human intelligence (Samek et al., 2017). Explainable AI contributes significantly to overcoming the challenges of ML through system verification, system improvement, legislative compliance and system learning (Yao, Zhou & Jia, 2018). It recognizes the need to verify the data that machines learn from, so that decisions rest on reliable data. AI models designed around explanation draw inferences from data and can recognize whether a particular result is correct for the inputs fed to the system (Mueller, 2016). Explainable AI can also identify new typologies or scenarios for institutions to adopt when making decisions with an AI system (Samek et al., 2017). Such hidden insights can improve model performance and reduce the risk of decisions based on limited data. The systems can explain how they discovered the new insights, adding value and providing verifiable information (Mitchell et al., 2013). The explainability of AI system decisions allows organizations to understand the whole process more effectively and to build trust in AI implementations by integrating the relevant regulations, helping businesses, the workforce and customers use AI systems more efficiently (Samek et al., 2017). These features of Explainable AI can address the ML issues of robustness and verification, real-time processing and availability of data, because they involve verifying the models that collect and integrate different types of data and determining how a decision was reached across those data types.
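None of the sources above prescribes a specific algorithm, but one standard model-agnostic way to interrogate a black-box model is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below is a generic illustration using scikit-learn and a built-in dataset, not any vendor's method.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an opaque model on a standard dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the score drop:
# the bigger the drop, the more the model depends on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model leans on most, a first step toward
# explaining (and verifying) its decisions.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

A check like this does not reveal the full decision logic, but it gives a reviewer concrete, testable evidence of which inputs actually drive a prediction.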

Explainable AI develops algorithms that explain the reasoning behind, and characterize the strengths and weaknesses of, the decision-making process (Mueller, 2016), with the further aim of conveying how the algorithms will behave in future. Such systems are used for predictive patterns, where companies apply them to augment existing decisions and improve business outcomes incrementally (Samek et al., 2017). Optimizing Mind, for example, is a company that has developed an explainable AI focused on the neural mechanisms of recognition, drawing on a multi-disciplinary perspective with extensive work in theory and simulation, experiments on human cognition and animal neurophysiology, and clinical training (Medium, 2018). Predictive models backed by explainable AI are used to monitor and manage the accuracy of decisions, enabling decision makers to learn from the models and make effective business decisions (Mitchell et al., 2013). FICO, a financial services giant, has built an explainable AI model that continuously improves from expanding data sources while offering transparency into why and how the model reached a given conclusion (Medium, 2018).

AI software vendors and enterprise users are forming a consortium to increase fairness, transparency and accountability by pooling their considerable resources (Some, 2019). simMachines has developed a proprietary similarity-based machine learning engine specializing in customer experience optimization, forecasting, explainable pattern detection, fraud and compliance (Terekhova, 2018). This explainable AI trains its algorithms with a similarity-based learning method, which lets clients see the "why" behind each prediction and the justification for every conclusion the system draws. Another explainable AI, developed by Accenture in 2018, detects and scrubs embedded biases, including racial, ethnic and gender bias, from AI systems (Terekhova, 2018). It supports important decisions about parole, mortgages and benefit eligibility. The tool measures an algorithm's predictive-parity fairness by checking whether the numbers of false positives and false negatives are the same across ethnicities and genders (Yao et al., 2018), and gives developers insight into how changes in a model's accuracy affect its predictive parity.
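Accenture's tool is proprietary, so the following is only a hedged sketch of the underlying check the paragraph describes: computing false positive and false negative rates separately for each demographic group and comparing them. The data, group labels and helper name are all illustrative.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, group):
    """Return {group_value: (false_positive_rate, false_negative_rate)}."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        t, p = y_true[mask], y_pred[mask]
        # FPR: fraction of actual negatives predicted positive;
        # FNR: fraction of actual positives predicted negative.
        fpr = p[t == 0].mean() if np.any(t == 0) else float("nan")
        fnr = (1 - p[t == 1]).mean() if np.any(t == 1) else float("nan")
        rates[g] = (fpr, fnr)
    return rates

# Synthetic predictions: group "B" gets noisier (more error-prone) outputs,
# mimicking the kind of disparity a parity audit is meant to surface.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)
y_pred = y_true.copy()
flip = rng.random(1000) < np.where(group == "B", 0.20, 0.10)
y_pred[flip] = 1 - y_pred[flip]

for g, (fpr, fnr) in error_rates_by_group(y_true, y_pred, group).items():
    print(f"group {g}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

If the rates differ materially between groups, the audit flags the model for review before it is used for decisions such as parole or benefit eligibility.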


Another development in explainable AI that should enhance machine learning and support better decisions and more accurate predictions is the Defense Advanced Research Projects Agency's (DARPA) explainable AI program, which aims to develop best practices and machine learning models that are more transparent yet still accurate, so-called "glass box" models (Some, 2019). DARPA is also investing in next-generation AI capable of contextual reasoning, to create a trusting, collaborative partnership between machines and humans (Dickson, 2019). Explainable AI is expected to give organizations in different fields an effective tool for making accurate decisions and predictions from large amounts of data, and for determining the best model to carry out the process, though it may cost more (Mueller, 2016). Although expensive, explainable AI will reduce certain risks and help establish stakeholder trust. The approach also has limitations, including a trade-off between fairness and accuracy: increasing transparency increases complexity and may require reducing the number of variables, which can affect accuracy (Yao et al., 2018). Another limitation is making the system understandable to users, since explainable AI can add complexity when different stakeholders view the same outcome from different perspectives (Mueller, 2016).
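The "glass box" idea mentioned above can be made concrete with a model whose entire decision logic is inspectable. The sketch below is a generic illustration, not DARPA's tooling: a shallow decision tree on a standard dataset, printed in full with scikit-learn's export_text. The depth cap mirrors the transparency-versus-accuracy trade-off the paragraph describes: fewer splits are easier to audit but may fit the data less well.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree is a classic "glass box": every split it uses
# can be printed and audited, at some cost in raw accuracy.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the full decision logic, so a reviewer can trace
# exactly how any individual prediction was reached.
print(export_text(tree, feature_names=list(data.feature_names)))
```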

Conclusion

Machine Learning enables organizations to make effective decisions, but with growing volumes of data and increasing complexity in business processes, several challenges arise in using ML. These challenges include a lack of robustness and verification, the availability of data, and real-time processing and streaming. Explainable AI has been developed to increase transparency and to provide justification for the predictions and decisions made, which is essential for their accuracy. It overcomes the challenges of ML by introducing algorithms and approaches in which models are analyzed for how they reach their decisions and predictions. The latest progress in explainable AI will enhance ML systems by allowing more transparency about how data is collected and how decisions are made.

References

Dickson, B. (2019). Inside DARPA’s effort to create explainable artificial intelligence. TechTalks. Retrieved from

https://bdtechtalks.com/2019/01/10/darpa-xai-explainable-artificial-intelligence/

L’Heureux, A., Grolinger, K., Yamany, H., & Capretz, M. (2017). Machine learning with big data: Challenges and approaches. IEEE Access, 5, 7776–7797.

Medium. (2018). The Importance of Explainable AI. Retrieved from

https://medium.com/predict/the-importance-of-explainable-ai-28db06e0c802

Mitchell, R. S., Michalski, J. G., & Carbonell, T. M. (2013). Machine learning: An artificial intelligence approach. Berlin: Springer.

Mueller, E. (2016). Transparent Computers: Designing Understandable Intelligent Systems. US, CreateSpace Independent Publishing Platform.

Samek, W., Wiegand, T., & Muller, K. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. ITU Journal: ICT Discoveries, 1, 1–10.

Some, A. (2019). Here is How Augmented Analytics and Explainable AI will Cause a Disruption in 2019 and Beyond. Analytics Insight. Retrieved from

https://www.analyticsinsight.net/here-is-how-augmented-analytics-and-explainable-ai-will-cause-a-disruption-in-2019-beyond/

Terekhova, M. (2018). Why Your Firm Must Embrace Explainable AI to Get Ahead of the Hype and Understand the Business Logic of AI. HFS Research. Retrieved from

https://www.hfsresearch.com/pointsofview/escape-the-black-box-take-steps-toward-explainable-ai-today-or-risk-damaging-your-business

The Royal Society. (2017). Machine learning: the power and promise of computers that learn by example. Retrieved from

https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf

Yao, M., Zhou, A. & Jia, M. (2018). Applied Artificial Intelligence. USA, TOPBOTS Inc.
