Speaker: Nurfadhlina Mohd Sharef, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia
Interactive machine learning (IML), also known as human-in-the-loop (HITL) learning, is an increasingly important research area because the knowledge learned by machine learning cannot replace human domain knowledge. For example, traditional ML methods are inefficient on small or complex datasets, whereas IML approaches handle this problem effectively. IML can support machine learning modeling on small data, allows human intervention to improve the model, and lets the model learn continuously from these interactions. The main goal is to design adaptive agents that support meaningful and beneficial interaction with humans. However, how ML mechanisms, which are well defined under statistical assumptions, behave on practical data, and how the ML model is updated in each iteration according to the input features, are not transparent and are usually ignored. Especially in specific applications, it is essential to study when and why ML algorithms work better or worse than expected, and then adjust the model accordingly. This indicates the need for a transparent ML tool that clearly shows the learning process to users and significantly facilitates the exploration of ML models. On the other hand, burdening human users with tuning the ML model can be impractical. Therefore, an IML framework is needed that uses deep reinforcement learning and generative adversarial network methods in the digital twin (DT) to maximize a reward function when optimising the ML models, while including human interaction for semi-autonomous decision making. This talk will share findings about deep reinforcement learning applied in several digital twin environments, such as recommender systems, student grade prediction, biodiversity, and supply chains.
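The abstract does not give implementation details, but the kind of loop it describes, a reinforcement learning agent optimising a reward in a simulated (digital-twin-like) environment while a human can intervene, can be illustrated with a minimal sketch. The sketch below assumes a tabular Q-learning agent on a toy environment and a simulated human feedback hook; all names (ToyTwinEnv, human_feedback, the reward values) are illustrative assumptions, not the speaker's framework.

```python
# Illustrative sketch only: tabular Q-learning in a toy "digital twin"
# environment, with a human-in-the-loop hook that can reshape the reward
# before each update. Not the framework presented in the talk.
import random
from collections import defaultdict

class ToyTwinEnv:
    """Tiny stand-in for a digital-twin environment: walk a 1-D line to a goal."""
    def __init__(self, size=5):
        self.size = size
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, min(self.size - 1, self.state + (1 if action == 1 else -1)))
        done = self.state == self.size - 1
        reward = 1.0 if done else -0.01  # small step cost, bonus at the goal
        return self.state, reward, done

def human_feedback(state, action, reward):
    """Placeholder for human intervention: here a simulated domain expert
    discourages moving left near the start (a hypothetical preference)."""
    if state <= 1 and action == 0:
        return reward - 0.5
    return reward

def train(episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    env = ToyTwinEnv()
    q = defaultdict(lambda: [0.0, 0.0])  # state -> Q-values for the two actions
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)
            # human-in-the-loop hook: the reward may be adjusted before learning
            reward = human_feedback(state, action, reward)
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    q_table = train()
    print({s: [round(v, 2) for v in vals] for s, vals in sorted(q_table.items())})
```

In a full IML setting, the human_feedback hook would be replaced by actual user interaction (for example, approving, correcting, or re-ranking the agent's decisions), and the toy environment by the digital twin of the target application (recommender system, grade prediction, biodiversity, or supply chain).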