An AI Discussion with an AI Pioneer in Taiwan

The Interview: Dr. Li-Chen Fu
 
   Artificial intelligence (AI) is currently one of the most important topics for both industry and individuals. The main question is: How will the AI trend impact people, industries and the next generation, and what opportunities might it bring in the next 30 years? We are all eager to know. Today, our guest is Dr. Li-Chen Fu, Distinguished Professor at the Department of Electrical Engineering and Computer Science, National Taiwan University, former President of the Asian Control Association (ACA), former AdCom member of the IEEE Robotics and Automation Society (RAS), and current Vice President of the IEEE Control Systems Society (CSS), who is going to share with us the past, present and future of the robotics industry, based on his more than 30-year career in the field.
 
1. Why did you choose to study robotics?
 
   The social and economic conditions were unpleasant when I started school. I was born to a family where my father was a civil servant and my mother was a housewife. My siblings and I didn’t have many choices compared with young people today; going to a public university seemed to be the proper choice for us. During my school years, I attended Mingchuan Elementary School, Heping Junior High School, and then Jianguo Senior High School until I entered National Taiwan University. Fortunately, I gradually developed an interest in studying, even though the academic pressure was unrelenting. I was also very fortunate to have had a group of good friends; we would accompany and encourage each other when we were studying. I am particularly grateful to my parents for giving me the opportunity to study at National Taiwan University. Later, I received a government scholarship to study for my Ph.D. at UC Berkeley, U.S.A. Being from an ordinary family, I feel extremely blessed to have had all these opportunities.
 
   My father was also an alumnus of the EE Department at NTU. Initially, he entered the university as a medical student; however, he changed his major to Electrical Engineering in his second year because he found that field more interesting. After graduation, he joined the Taiwan Power Company and dedicated himself to that company until his retirement. He was almost magical to me when I was a kid: it seemed as if he could resolve any issue related to electricity; he could even make a motor or a voltage transformer from scratch. My dad never truly encouraged me to pursue any specific career, although my mother once encouraged me to be a doctor. Nevertheless, I still believe my father had a significant influence on me: his influence was one of the major reasons I finally chose to pursue a career in EE.
 
   From 1983 to 1987, I was studying for my Ph.D. at UC Berkeley. Back then, robotics research was at a very early stage, and it was mainly focused on industrial robotics. I was given an opportunity to learn more about robotics at a lab at UC Berkeley, which had a few educational and industrial robots. It was then that I started to realize that robot behavior can be designed and triggered through coding. I found this idea truly intriguing. My Ph.D. research at the time was actually focused on “adaptive control techniques,” but unexpectedly, robotics became a passion that lasted beyond my official research. After graduation, I took a summer internship at an industrial robotics company to learn more about how to calibrate robots, their applications and the industry as a whole.
 
2. How have the trends in robotics changed in the 30 years since you graduated from UC Berkeley?
 
   After graduating from UC Berkeley and moving back to Taiwan, I felt that robotics research and education there were relatively scarce. Therefore, I thought: “Since robots are triggered by coding, why don’t I set up a robotics lab in the Computer Science department?” As a result, I started leading a group of computer science students, and we officially set up the robotics lab at National Taiwan University. I also started to teach robotics-related courses and do research through the Computer Science and Electrical Engineering Departments with a view to offering more resources and opportunities for students in Taiwan to learn about robotics. This initiative also marked a starting point for Taiwan’s robotics education.
 
   Early robotics knowledge and applications in Taiwan centered on industrial robots, and our research also focused mainly on them. At that point, our government was dedicated to developing Taiwan’s industrial robot industry and was hoping to strategically integrate Taiwan’s industrial robots with its industrial automation policies. The most dedicated semi-governmental organization was the Industrial Technology Research Institute, which had already researched and developed approximately 10 prototype industrial robots. From a global point of view, the U.S. and Japan were the two leading industrial robot countries at that time, and Taiwan faced tremendous challenges in producing its own industrial robots from many points of view: experience, price, accuracy, and the ability to acquire key components. Moreover, the government later switched direction and introduced foreign labor instead of encouraging deployment of large-scale, advanced industrial automation facilities, a decision that inevitably had a major impact on the development of industrial robots in Taiwan.
 
   In the roughly 10 years after I started my research into industrial robotics, the industrial robot industry as a whole gradually matured. Nevertheless, the idea of service robots was still a very new concept that could largely be found only in research papers. Because the major applications of industrial robots were in factories, it was not easy for ordinary people to see how they were developed and used. Not too long ago, starting with the availability of the now-common Robotic Vacuum Cleaner (RoboVac), people started to have real experiences with service robots. This development signified the beginning of service robots entering people’s lives. The current state of development and application of service robots has been both significant and impressive: from the da Vinci Surgical Systems found in hospitals to the delivery robots used in the hotel and service industry, people have started to encounter service robots far more often. Moreover, they have started to realize that robots can actually facilitate their daily lives, and they have become far more willing to accept robots. Thus, the shift I have seen in my 30-year technical career extends from factories to people’s daily lives, from repetitive manufacturing tasks to diverse lifestyle applications used on a daily basis.
 
3. What will the opportunities and challenges be in the next 30 years?
 
   The breakthrough of having robots that actually affect people’s daily lives represents the biggest challenge. One reason that industrial robots have gradually matured is that they have a narrow scope: they are meant to be used in factories, an almost unchanging, precise, large-scale and repetitive manufacturing environment that is ideal for industrial robot applications. In contrast, the environments in which people carry out their daily activities are variable and change constantly; therefore, robots must be smarter, more flexible and more sensitive to surrounding environmental changes to be effective in people’s daily lives. This need for flexibility and intelligence has been the major challenge for the development of service robots for years, and it is the key reason why service robots are not quite ready to be extensively used. Not too long ago, the advent of new artificial intelligence (AI) technology constituted a development breakthrough: through big data and deep learning, robots can make decisions or even take actions based on the surrounding scenarios. The example of AlphaGo, the AI program that beat the world Go champion, says it all. By learning from big data, robots do not have to actually experience a situation to learn about it; instead, unlike human beings, they can learn directly from accumulated big data and, as a result, make even more sensible decisions than human beings can. This significant advance has provided a path for robots to enter people’s lives. In the upcoming 20 to 30 years, as AI technology becomes more mature, I believe robots will be able to have ever more instinctive and human-like interactions with human beings, and they will be able to make smarter decisions based on the current situation. If that happens, robots are very likely to become lifetime companions for humans.
 
   Nevertheless, we still have a long way to go: AI is a new technology, and it takes much more than AI alone for robots to truly enter people’s lives; it requires many other human-centered technologies, such as robotics and intention recognition. To sum up, robotics now requires multidisciplinary research and the integration of multiple technologies: many parties must work together to make it possible. Among the existing applications in the market, the “ChatBot” is, in my opinion, one of the good examples. Although a “ChatBot” does not have to have a physical existence, it interacts with people through speech recognition and deep learning, and it is, I believe, a pioneering application of social robotics. If we were to further integrate the ChatBot idea with image recognition and other fully developed areas of AI technology, it would be highly likely to influence people’s lives through human-like interactions.
 
4. What do you think is the most significant current application based on machine vision and AI technology?
 
   The two most significant uses of AI technology today are image recognition and speech recognition. For example, some countries utilize image recognition technology for security purposes. They have installed an enormous number of cameras that can rapidly track, read and recognize people’s faces. The underlying concepts are nothing new; however, image recognition accuracy has increased substantially by leveraging big data and deep learning technologies. Another very important topic currently is the autonomous car. By installing cameras and other sensors on a car, the car can rapidly and accurately identify the road using image recognition techniques and big data, and it can learn to drive autonomously even in bad weather or poor lighting conditions. AI technology has provided a critical breakthrough for the development of autonomous cars, which can potentially improve human life from a safety viewpoint. In addition, in the medical field, medical images acquired through techniques such as MRI, CT, ultrasound and so on have long been important in diagnosing diseases. Nevertheless, the growth in medical imagery has significantly outpaced the growth of human medical resources; therefore, we must rely on AI technology through big data learning and training. As this technology has progressed, it is increasingly capable of diagnosing conditions such as Alzheimer’s disease, tumors, mental illnesses and many others ever more rapidly and accurately. These examples all show that we have already developed some very exciting capabilities enabled by AI-based image recognition technologies. In the future, as the technology matures, I believe there will be far more exciting applications.
 
5. How do AI and industrial robots accelerate “Industry 4.0”? What opportunities can Taiwan’s industry benefit from?
 
   Based on the statistics provided by the International Federation of Robotics (IFR), the demand for industrial robots has continued to grow worldwide. Although the growth rate for industrial robots is not as high as that for service robots, this opportunity is still something that Taiwan should not miss. The engineering and technical talents in Taiwan are highly competitive even at the global level, and we should make a greater effort to identify talent and build our competitive edge based on the assets we have. In recent years, I have seen a few Taiwanese industrial robotics companies that have dedicated themselves to this area; however, to be able to compete with the countries leading in robotics, we must integrate our industrial robots with the key technologies of AI and Industry 4.0. Our industrial robots must meet the standards of “Industry 4.0”: open data, transparent production lines, self-maintenance, etc. Additionally, the robots must be integrated with AI-embedded sensing technologies to give them vision and touch capability so they can go beyond the narrow definition of “pure manufacturing robots” and be able to further interact with the environment. If we can accomplish those tasks, our robots are likely to be competitive in the global arena.
 
6. What opportunities does Taiwan have with regard to the AI trend?
 
   Our government has paid very close attention to the AI trend. In my opinion, the rise of AI represents a simultaneous restructuring and reallocation of global technology competencies. It is not easy for Taiwan to compete with other big countries in terms of market or capital size; however, in the highly competitive computer science and electrical engineering fields, talent has always been one of Taiwan’s biggest assets, and one we can be proud of. If we can seize this great opportunity, dedicate our talents and resources to this area and continue to attract even more young talent, I truly believe that the industries in Taiwan will have opportunities to be even more competitive globally. For example, in the past, we developed some cutting-edge medical technologies. If we were to integrate these with AI technology, it would be possible to excel internationally again. The manufacturing industry in Taiwan is also globally renowned and includes outstanding companies such as TSMC. If we can capitalize on AI technology to advance our existing capabilities, I also believe our manufacturing industry can be great again. Just as “education” was the way to advance one’s social class when I was a student, “AI” technology, I believe, is now Taiwan’s opportunity to change its global technological and economic status.
 
Advice for the younger generation:
 
   Robotics is a course I introduce at the beginning of every school year at NTU. In this course, in addition to providing students with basic knowledge of robotics, I ask them to complete a project by the end of the semester. There is no specific topic for this project; my goal is simply to get the students to think and imagine. To help the brainstorming process, I show them some videos of the latest robotic trends. The idea I want to convey through this project is that there are no clear boundaries in robotics; the boundaries can be pushed as far as your imagination can go. I would like to encourage my students, and the entire younger generation: whatever field or profession you decide to pursue in the future, robotics technology can always be applied to facilitate or even accomplish that work. My expectation for the younger generation is, truly, to “use your imaginations, think big and bold; together we will be able to create a greater future with robots and human beings working and living together.”
 
Latest Research Focuses
 
   Professor Fu has been dedicated to research into intelligent robots, computer vision and smart home technologies coupled with artificial intelligence for decades. His research results and latest developments are introduced below.
 
A. Research on Intelligent Robots
 
   The goal of the Intelligent Robot group is to make robots capable of providing services in real-world environments and improving interactions between robots and humans. Research on mobile robots has gained considerable attention thanks to breakthroughs in both algorithms and hardware. Based on these recent developments, we want to push the technology even further and make robots more intelligent and able to address real-world problems. To successfully introduce robots to daily living environments in the near future, they must have the ability to interact with people. Consequently, our recent research has focused on Human-Robot Interaction (HRI) and AI with the goal of enabling robots to understand human intentions and infer appropriate reactions.
 
   Currently, the Intelligent Robot group mainly concentrates on the following research areas:
1. Robot Vision - The goal is to obtain rich information from camera images. The research topics in this area include A) object recognition, B) tracking/gesture recognition/human detection/face recognition, and C) visual SLAM.
2. Mobile Robot Systems - The goal is to be able to control robot actions to meet real-world practical demands. The research topics in this area include A) control systems for robot navigation/behavior control/path planning, and B) mapping and localization.
3. Human-Robot Interaction - The research topics in this area include A) knowledge-based reasoning and B) reinforcement learning.
 
Our latest research results and contributions to the field of intelligent robots are summarized below:
 
• Map-less Indoor Visual Navigation Based on Image Embedding and Deep Reinforcement Learning
 
   We propose a novel architecture whose objective is to achieve large-scale environmental navigation in an indoor environment without requiring a preconstructed map. Large-scale indoor environments demand complex spatial perception, especially when the indoor space includes many walls, doors and other features that might occlude the robot’s view. Using the proposed hierarchical deep reinforcement learning technique and an image embedding space generated by an autoencoder, our method can achieve large-scale indoor visual navigation without requiring prior map information or human instruction.
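
   To make the idea concrete, below is a minimal sketch of how an autoencoder-derived image embedding can feed a deep-RL navigation policy. The network shapes, the discrete action set and the class names (Autoencoder, NavigationPolicy) are illustrative assumptions, not the hierarchical architecture actually proposed in this work.

```python
# Minimal sketch: image embedding + deep RL navigation (illustrative only).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compresses an 84x84 grayscale camera frame into a compact embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),   # 84x84 -> 20x20
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),  # 20x20 -> 9x9
            nn.Flatten(), nn.Linear(32 * 9 * 9, embed_dim))
        self.decoder = nn.Sequential(  # reconstruction target trains the embedding
            nn.Linear(embed_dim, 84 * 84), nn.Unflatten(1, (1, 84, 84)))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class NavigationPolicy(nn.Module):
    """Q-network over (current embedding, goal embedding)."""
    def __init__(self, embed_dim=64, n_actions=4):  # e.g., forward/left/right/stop
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, z_current, z_goal):
        return self.q(torch.cat([z_current, z_goal], dim=-1))

# Usage: encode the current camera frame and a goal image, then act greedily.
ae, policy = Autoencoder(), NavigationPolicy()
frame, goal = torch.rand(1, 1, 84, 84), torch.rand(1, 1, 84, 84)
_, z_now = ae(frame)
_, z_goal = ae(goal)
action = policy(z_now, z_goal).argmax(dim=-1)
```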
 
• Hybrid Interactive Reinforcement Learning-based Assistive Robot Action Planner for Child Emotional Support
 
   Socially Assistive Robotics (SAR) is an emerging multidisciplinary field of study that has potential applications in fields such as education, health management and elder care. We propose an interactive decision-planning module developed for RoBoHoN whose goal is to explore the feasibility of using a robot to engage with children. To validate the usefulness of the proposed methodology, we conducted a pilot study with healthy elementary school-aged children. Based on the results, our platform has the potential to be implemented using the developed Wizard-of-Oz interface in real environments (e.g., classrooms or hospitals).
 
• Human-Robot Interaction (HRI) with Dialogue Based on Hybrid Media
 
   To provide psychological support and physical company to elderly people and children in everyday life, robots should be proactive enough to initiate the most appropriate topics for phatic dialogue based on ambient contexts. Moreover, the robots should also encourage elderly people and children to continue the conversation to create more opportunities for them to express themselves well or to improve their memory and their organizational abilities. Take an older person, for example: the robot can select conversation topics from previous events in that person’s life, such as by helping them reminisce about old photos. The hope is that such interactive discussions about the people, events, times, places and things relevant to that individual will encourage older people to share their past experiences as they would with a friend, which can be very useful for augmenting elderly people’s cognitive abilities.
 
• Learning Adaptive Behaviors: Social Robot Navigation with Composite Reinforcement Learning
 
   Recently, deep reinforcement learning (DRL) has been applied to the robotics field. For service robots, we propose a composite reinforcement learning (CRL) system that provides a framework for using sensor input to learn how to determine an appropriate robot velocity. The system uses DRL to learn the velocity in a given set of scenarios, and a reward update module provides ways to update the reward function based on human feedback. The CRL system is able to incrementally learn to determine an appropriate velocity from a set of given rules. Additionally, it collects human feedback to continually synchronize the system’s reward functions with current social norms.
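
   The following sketch illustrates one plausible reading of the reward update module: the reward is a weighted sum of social-navigation terms, and human feedback nudges the weights. The term choices and the AdaptiveReward class are hypothetical; the published CRL formulation may differ.

```python
# Illustrative sketch of a human-feedback reward update (assumption, not the
# actual CRL reward design).
import numpy as np

class AdaptiveReward:
    def __init__(self):
        # Terms: progress toward goal, closeness to people, motion smoothness.
        self.weights = np.array([1.0, 1.0, 0.5])

    def __call__(self, progress, person_proximity, jerk):
        terms = np.array([progress, -person_proximity, -jerk])
        return float(self.weights @ terms)

    def update_from_feedback(self, terms, feedback, lr=0.1):
        """feedback: +1 if a human rated the behavior good, -1 if bad.
        Shift weights so the rated behavior scores higher (or lower)."""
        self.weights += lr * feedback * np.asarray(terms)
        self.weights = np.clip(self.weights, 0.0, None)  # keep term signs sensible

reward = AdaptiveReward()
r = reward(progress=0.3, person_proximity=0.8, jerk=0.1)
# A human flags the behavior as too close to a person: raise the proximity penalty.
reward.update_from_feedback(terms=[0.3, -0.8, -0.1], feedback=-1)
```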
 
• Learning-based Shared Control for a Smart Walker with a Multimodal Interface
 
   A walking-assistant robot is an assistive device to enable safe, stable and efficient locomotion for elderly or disabled individuals. We propose a learning-based shared control system with a multimodal interface that features both cognitive human-robot interaction (HRI) for gait analysis and traditional physical HRI for measuring the user’s exerted force. The interface extracts navigation intentions using a novel neural network-based method that combines features from (i) a depth camera to estimate the kinematics of the user’s legs and infer when the user’s orientation deviates from the robot’s velocity direction and (ii) force sensors to measure the physical interactions between the human’s hands and the robotic walker.
 
   To ensure the robot’s ability to autonomously adapt to different users’ operation preferences and motor abilities, we propose a reinforcement learning-based shared control algorithm that not only improves user comfort during device usage but also adapts to the user’s behavior automatically.
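
   As a rough illustration of the multimodal fusion step, the sketch below combines leg-kinematics features from a depth camera with handle-force readings in a small network that outputs a velocity command. The layer sizes, feature dimensions and the IntentionFusionNet name are assumptions for illustration, not the published architecture.

```python
# Hedged sketch of multimodal intention estimation for a walker-type robot.
import torch
import torch.nn as nn

class IntentionFusionNet(nn.Module):
    def __init__(self, leg_dim=6, force_dim=3):
        super().__init__()
        self.leg_branch = nn.Sequential(nn.Linear(leg_dim, 32), nn.ReLU())
        self.force_branch = nn.Sequential(nn.Linear(force_dim, 16), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 32), nn.ReLU(),
            nn.Linear(32, 2))  # output: (linear velocity, angular velocity)

    def forward(self, leg_feats, force_feats):
        h = torch.cat([self.leg_branch(leg_feats),
                       self.force_branch(force_feats)], dim=-1)
        return self.head(h)

net = IntentionFusionNet()
leg = torch.rand(1, 6)    # e.g., leg positions/velocities estimated from depth
force = torch.rand(1, 3)  # e.g., x/y forces and torque at the handles
v_cmd = net(leg, force)   # a shared-control layer would blend this with safety limits
```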
 
B. Research on Computer Vision and Virtual Reality
 
   Perceptual ability has played a major role in the evolution of most creatures. For human beings, vision and related abilities give us advantages in almost every aspect—and they still shape our society today.
 
   As researchers in robotics and automatic control systems, our goal is always to make the world a safer and better place. To achieve that, we want to enable robots and machines to understand the world. Perception is the most straightforward means to accomplish that, but giving machines human-like perceptual capabilities is still challenging. In the last decade, our team has constructed a variety of visual systems that address different problems, including robot applications, visual servoing, object tracking and human-machine interfaces. Moreover, since the rise of AI and deep learning, we have extended our methods from traditional machine learning with hand-designed features to primarily deep learning-based approaches. In recent years, in applying deep learning to classic computer vision problems, we have mainly focused on three aspects: human-centric recognition, advanced driver assistance systems (ADAS) and VR/AR systems, each of which is explained separately below.
 
• Human-Centric Recognition
 
   It has been several decades since the baby-boom era, and Taiwan is now facing the issue of an aging society. Making this issue even tougher is the fact that the number of caregivers falls far short of the demand for at-home or in-facility elderly care. To address this issue and alleviate the caregiver overload, we are developing a care-assistive surveillance system. In this system, several depth sensors are installed on the ceilings of indoor environments, and they monitor from above. By employing depth sensors rather than RGB cameras, we avoid excessive intrusion into older people’s privacy as well as any disturbance of their daily lives. Unlike a traditional RGB camera, a depth sensor obtains only a 3D human profile, not a color image.
 
   From this surveillance sensor network, we propose a number of algorithms to detect, track and identify humans in their daily living environments. This type of surveillance network can not only help us understand where individuals are but also help us infer more information about them. For example, in eldercare, daily activity is the most important information to know but the most difficult to observe. It requires a huge amount of caregiver time to monitor and record the activities of the elderly. Instead, we propose using an algorithm to perform activity detection. In contrast to traditional action recognition, which requires predefined starting and ending times, our method relaxes that need through an approach that detects activity automatically. The method’s recognition capability stems from machine learning, which learns appropriate features from the 3D depth profile and the temporal sequence. It turns out that such methods can identify actions in any collected offline video or even during real-time surveillance. Our method shows that a machine can understand not only what action a person is taking but also when it is taking place. Through our activity detection algorithm, the activities of the elderly can be monitored and recorded more accurately and comprehensively than is possible through manual recording. Furthermore, an activity log could be made available to doctors or caregivers for analysis to catch dementia, depression, or other degenerative conditions early by identifying changes in people’s activity patterns. Of course, we could also investigate applications in which warning signals are sent whenever some dangerous state is detected. For example, we can detect when an older person falls or gets out of bed, and the system can warn caregivers, allowing them to provide timely assistance. Overall, to minimize the impact of societal aging and alleviate caregiver overload, our human-centric care-assistive surveillance system can provide both real-time observations and data logs, and it can help alert caregivers when necessary.
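
   A hedged sketch of the window-based detection idea follows: slide a fixed-length window over the stream of depth-derived features and classify each window, so no predefined starting and ending times are needed. The window length, feature layout and use of a random forest are illustrative assumptions, not the published method.

```python
# Minimal sketch of sliding-window activity detection (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 30  # frames per window (e.g., ~1 s at 30 fps)

def windows(stream, step=10):
    for start in range(0, len(stream) - WINDOW + 1, step):
        yield start, stream[start:start + WINDOW]

def detect_activities(clf, stream, min_conf=0.7):
    """Return (start_frame, label) for confident windows; skip uncertain ones."""
    events = []
    for start, w in windows(stream):
        feats = w.reshape(1, -1)  # flatten the window into one feature vector
        probs = clf.predict_proba(feats)[0]
        if probs.max() >= min_conf:
            events.append((start, clf.classes_[probs.argmax()]))
    return events

# Toy usage with random "depth profile" features, 8 values per frame.
rng = np.random.default_rng(0)
X_train = rng.random((100, WINDOW * 8))
y_train = rng.choice(["walking", "sitting", "lying"], size=100)
clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
stream = rng.random((300, 8))
print(detect_activities(clf, stream))
```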
 
• Advanced Driver Assistance Systems (ADAS): Using Computer Vision to Detect Road Obstacles
 
   The objective of our ADAS research is to develop forward object detection and target object tracking using deep learning through convolutional neural networks (CNNs). We have proposed several novel deep learning frameworks, such as Deep Convex-NMF Net (DC-Net) and SceneNet. The former is a novel deep learning framework that incorporates the proposed Deep Convex Non-negative Matrix Factorization (DC-NMF) technique to process camera images for obstacle detection, whereas the goal of the latter is to perform effective and accurate object detection by combining both object and scene information after end-to-end training on datasets containing abundant scenes and multiscale objects. Thus far, these works have achieved superb performance compared with state-of-the-art methods.
 
• VR (Human-Computer Interaction)
 
   3D hand pose estimation has been a hot research topic in recent years. Hand pose estimation is widely used in many advanced VR and HRI applications because it provides a natural interface for communication between humans and cyberspace that delivers an excellent user experience. In this research field, we aim to build a 3D hand pose estimation system that can correctly detect a human hand and accurately estimate its pose. To guarantee system robustness, we design a hand model called the spherical part model (SPM) and train a deep convolutional neural network using this model. The results show that the system can detect human hands with an average precision of nearly 90% and achieve an average error distance of approximately 10 millimeters in pose estimation. This result substantially outperforms other state-of-the-art approaches. We also proposed a hybrid method that combines deep learning with hierarchical refinement for hand pose estimation in 3D space using depth images. It introduces a so-called skeleton-difference layer that a convolutional neural network (CNN) training process can use to effectively learn the shape of a hand as well as its physical constraints. These results are attributable to the method’s ability to consider physical joint relationships, which, in turn, makes the estimated hand poses quite natural and complete.
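
   The sketch below shows one way a skeleton-difference signal could be used in training: supervise the pairwise differences between predicted joints in addition to their absolute positions, so the network learns bone structure and physical constraints. This is an illustrative reading of the concept, not the published layer.

```python
# Hedged sketch of a skeleton-difference training signal (assumption only).
import torch

def skeleton_difference(joints):
    """joints: (batch, n_joints, 3) -> (batch, n_joints, n_joints, 3)
    tensor of vectors between every pair of joints."""
    return joints.unsqueeze(2) - joints.unsqueeze(1)

def pose_loss(pred, target, alpha=0.5):
    # Absolute position error plus pairwise structure error.
    pos = torch.mean((pred - target) ** 2)
    diff = torch.mean((skeleton_difference(pred) - skeleton_difference(target)) ** 2)
    return pos + alpha * diff

pred = torch.rand(4, 21, 3, requires_grad=True)  # 21 hand joints per sample
target = torch.rand(4, 21, 3)
loss = pose_loss(pred, target)
loss.backward()  # gradients now reflect both position and bone-structure errors
```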
 
C. Research on Smart Home
 
   In addition to smart home technologies themselves, the smart home group focuses on two specific research fields: health care and energy saving. Since 2005, we have expended extra effort on the field of smart homes. Here, we aspire to enhance a home’s intelligence by leveraging information and communication technologies (ICT) and the Internet of Things (IoT). We also pioneered the idea of the “attentive home,” arguing that a smart environment should include a service-driven architecture based on the Open Services Gateway initiative (OSGi) and mobile agents, a message-driven supporting model and middleware, and a reliable service-management protocol capable of self-repair and flexible management. This idea provides an all-around service management platform for smart homes. Moreover, from the human-machine interaction point of view, we came up with a framework that takes “users,” “environments,” and “services” into consideration to satisfy “comfort,” “convenience,” and “safety” as design principles for interactive services in smart homes.
 
   To serve each family member individually, we also proposed an indoor tracking algorithm that uses pressure sensors installed underneath the floor together with an activity recognition model based on the identified users’ locations and numerous wireless sensors.
 
   We are also endeavoring to develop a nonintrusive wireless sensor network system to monitor patients’ lifestyles in hospital wards after cancer treatments to reduce the burden on the nursing staff. Moreover, a Home-care Guideline Execution Engine (H-GLEE) was developed based on a wireless sensor network and the Guideline Interchange Format (GLIF). This engine is capable of monitoring the status of elderly people and detecting possible health risks. Consequently, it can serve as a tool for telemedicine. In addition, cloud-computing-related technologies are applied to handle the big data from the sensor networks and electronic health records to make the system more efficient.
 
   Because Alzheimer’s disease and other types of dementia are becoming one of the most serious global issues of this century, we proposed a system that helps caregivers recognize an elderly person’s Activities of Daily Living (ADL) in their homes. This system incorporates both ambient sensors deployed in the environment and wearable sensors. It utilizes clustering and supervised learning approaches to classify the actions. Monitoring can occur in real time. Moreover, the system can also discover new or anomalous activities.
 
   Because there is no cure, early detection that enables the best intervention is crucial. Currently, it is inconvenient and time-consuming for doctors to diagnose whether elders who live independently have dementia because they must first ask the individual to answer numerous diagnostic questions on a checklist; in fact, some diagnoses require long-term observation. To help doctors and simplify the diagnostic process, we proposed a supporting system that can quickly screen elderly people and, based on a behavioral test, estimate their likelihood of having dementia in only 2 to 4 hours. During the behavioral test, the elderly need only perform some selected activities from the so-called Instrumental Activities of Daily Living (IADL) in a smart home environment. Then, a supervised machine learning algorithm conducts the classification based on our proposed features extracted from motion sensors deployed in the smart home environment.
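
   As an illustration of such a screening pipeline, the sketch below summarizes motion-sensor events from an IADL session into simple features (duration, activity counts, hesitation gaps) and applies a supervised classifier. The feature set and classifier choice are assumptions, not the published method.

```python
# Illustrative sketch of a sensor-based dementia screening pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def session_features(sensor_events):
    """sensor_events: list of (timestamp_s, sensor_id) firings in one session."""
    times = np.array([t for t, _ in sensor_events], dtype=float)
    sensors = [s for _, s in sensor_events]
    gaps = np.diff(times) if len(times) > 1 else np.array([0.0])
    return np.array([
        times[-1] - times[0],     # total task duration
        len(sensor_events),       # amount of movement/activity
        gaps.mean(), gaps.std(),  # hesitation patterns between steps
        len(set(sensors)),        # how many distinct areas were visited
    ])

# Toy training data: rows of session features; labels 1 = at-risk, 0 = healthy.
rng = np.random.default_rng(1)
X = rng.random((60, 5))
y = rng.integers(0, 2, size=60)
clf = GradientBoostingClassifier().fit(X, y)
risk = clf.predict_proba(session_features(
    [(0.0, "kitchen"), (35.2, "stove"), (120.9, "sink")]).reshape(1, -1))[0, 1]
print(f"estimated risk score: {risk:.2f}")
```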
 
   In recent years, energy issues have attracted considerable attention. In our group, we focus on enhancing energy efficiency through demand-side management, a relatively new approach whose main concept is to change people’s habits on the demand side to reduce peak loads. Using real-time energy pricing, users can reduce their energy costs, and utilities can reduce their costs for generating energy. In our previous research, we first focused on the single-home scenario. We presented a household energy saving system, called the M2M-based Context-aware Home Energy Saving System (M-CHESS) (see Fig. 1), that does not sacrifice residents’ comfort. The system comprises three main engines: an Energy-Responsive Context Inference Engine (ERCIE), a User Comfort Evaluation Engine (UCEE), and an Energy Saving Decision Support Engine (ESDSE).
 
Figure 1. System overview of M-CHESS
 
   The ERCIE infers the relationship between an ongoing activity and its corresponding home appliances, that is, the energy-responsive contexts (ERCs), in a multiresident environment. The UCEE is responsible for evaluating the resident’s comfort during each activity, and the ESDSE optimizes the states of home appliances by combining the comfort constraints resulting from the UCEE with the ERCs inferred by the ERCIE. The presented system is not only theoretically sound but has also been highly acclaimed by Intel. We conducted long-term single-user and short-term multiuser experiments to verify the effectiveness of M-CHESS; the total energy saved was 36.76% when applying only the ERCIE and ignoring the residents’ comfort. However, after adding the constraints for the residents’ comfort in terms of lighting and their overall comfort, the system was still able to achieve energy savings of 31.57% and 28.98%, respectively. These experimental results showed that M-CHESS is capable of maximizing energy savings without sacrificing residents’ comfort.
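
   A minimal sketch of the decision-support step in this spirit: choose appliance power states that minimize energy while keeping a comfort score, derived from the inferred activity context, above a threshold. The toy appliances, comfort model and exhaustive search below are illustrative assumptions, not the actual ESDSE.

```python
# Hedged sketch: comfort-constrained appliance-state optimization.
from itertools import product

APPLIANCES = {  # appliance name: candidate power levels in watts
    "light": [0, 30, 60],
    "ac": [0, 400, 900],
    "tv": [0, 120],
}

def comfort(state, activity="reading"):
    # Toy comfort model: reading needs light; AC adds comfort proportionally.
    score = 0.0
    if activity == "reading":
        score += min(state["light"] / 60, 1.0) * 0.6
    score += min(state["ac"] / 900, 1.0) * 0.4
    return score

def optimize(activity, min_comfort=0.5):
    best = None
    names = list(APPLIANCES)
    for levels in product(*(APPLIANCES[n] for n in names)):
        state = dict(zip(names, levels))
        if comfort(state, activity) < min_comfort:
            continue  # violates the comfort constraint
        energy = sum(levels)
        if best is None or energy < best[0]:
            best = (energy, state)
    return best

print(optimize("reading"))  # lowest-energy state that keeps comfort >= 0.5
```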
 
   After the single-home scenario, we conducted further research using a community scenario. The community scenario is more complicated and more difficult to solve. We began by addressing several important problems that arise in community situations. First, we addressed privacy issues through a “secret sum” algorithm, which can prevent a user’s information from being revealed to the community. Second, we sped up particle swarm optimization (PSO) by proposing a new version that we named the improved PSO (IPSO). We added a convergence test to ordinary PSO in which the variance of the fitness values acts as a termination criterion, ending the process when every particle has a similar fitness value. Using the IPSO, we require only 1 or 2 seconds to obtain an optimal solution, compared to the original algorithm. The last part involves plug-in electric vehicles (PEVs), which have become a prominent study area in recent years. A PEV provides benefits to a smart grid system because its large battery capacity provides good load-shifting ability. We have combined PEV integration, local energy generation systems and local energy storage systems into one system to further reduce the energy cost and smooth peak loads.
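
   The variance-based termination idea can be sketched as follows: run standard PSO updates but stop early once the particles reach nearly identical fitness values (low variance). Everything beyond that termination test, including the parameter values, is a generic PSO assumption rather than the published IPSO.

```python
# Hedged sketch: PSO with a fitness-variance convergence test (IPSO-style stop).
import numpy as np

def ipso(fitness, dim, n_particles=20, iters=200, var_tol=1e-6,
         w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(fitness, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
        if pbest_f.var() < var_tol:  # convergence test on fitness variance
            break                    # all particles agree; terminate early
    return gbest, pbest_f.min()

best_x, best_f = ipso(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_x, best_f)
```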
 
   In future research, we plan to focus on integrating the different types of users into our system. We have an ongoing study regarding the hybrid community. Another research direction concerns load anomaly detection. We plan to use deep learning methods to perform load forecasting and detect anomalous situations.
 
   Words contributed by Bryant Chen, Victoria Lin and the Intelligent Robot Lab, Department of Computer Science and Information Engineering (CSIE), National Taiwan University. Feel free to send your feedback or inquiries for collaboration to lichen@ntu.edu.tw.
