“How can we solve transportation inequality?” An interview with Associate Professor Hidekazu Suzuki.

In recent years, there has been an increase in news about car accidents involving elderly drivers. While they are encouraged to return their driver’s licenses, living without an alternative means of transportation can be difficult, especially for those living in rural areas or those with limited mobility. In this context, the concept of “MaaS (Mobility as a Service)”—which seamlessly connects various modes of transportation such as trains, buses, taxis, rental cars, and bike-sharing for route searches, reservations, and payments—has been gaining attention. It is essential to maintain bus services and improve their convenience, especially for the last mile from the station to home. However, how can we provide information to generations that are not adept at using smartphones? We asked Associate Professor Hidekazu Suzuki for his insights.


Installing Smart Bus Stops That Fully Utilize IoT and ICT Technologies

MaaS has become a frequently discussed topic in Japan over the past few years. Currently, payment and reservations for trains and buses are handled separately by each operator, requiring users to juggle different applications. With MaaS, transfers, payments, and reservations across multiple operators could be handled through a single service, which would be far more convenient. Unifying these services is difficult, however, because each company publishes schedules and location information in its own format, and bus operators' data in particular is often poorly organized.

To address this issue, an initiative began around 2016 to introduce GTFS (General Transit Feed Specification), the de facto international standard format for public transit data, in Japan. Private bus operators nationwide, municipalities, and university researchers are working together to develop data in the Japanese version of the format, GTFS-JP. I am also collaborating with local governments in Aichi Prefecture on this data organization.

Now, it has become easier to search for bus routes and transfer information using Google Maps and other services. This improvement is due to the organization of GTFS-JP data and its integration into Google’s system. Real-time information is also available, allowing users to know how many minutes a bus will be delayed and the timing of the next connecting train.
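
For readers curious what this data looks like in practice: a GTFS feed is simply a set of CSV text files (stops.txt, stop_times.txt, trips.txt, and so on). The sketch below, with a hypothetical feed directory and stop name, shows how little code it takes to answer "when are the next buses from this stop?" once the data is organized.

```python
# Minimal sketch: answering "when are the next buses from this stop?"
# from a static GTFS feed. GTFS feeds are plain CSV files; "stops.txt"
# and "stop_times.txt" are standard file names in the specification,
# but the feed directory and stop name below are hypothetical.
import csv
from pathlib import Path

FEED = Path("gtfs_feed")  # hypothetical directory holding an unzipped feed

def stop_ids_by_name(name: str) -> set:
    """Collect the stop_ids whose stop_name matches exactly."""
    with open(FEED / "stops.txt", encoding="utf-8") as f:
        return {row["stop_id"] for row in csv.DictReader(f)
                if row["stop_name"] == name}

def next_departures(name: str, after: str, limit: int = 5) -> list:
    """Return the first few (departure_time, trip_id) pairs after a
    given HH:MM:SS time, across all trips serving the named stop."""
    ids = stop_ids_by_name(name)
    with open(FEED / "stop_times.txt", encoding="utf-8") as f:
        deps = [(row["departure_time"], row["trip_id"])
                for row in csv.DictReader(f)
                if row["stop_id"] in ids and row["departure_time"] >= after]
    return sorted(deps)[:limit]

print(next_departures("City Hall", "08:00:00"))
```

Real-time delay information is published separately, through the companion GTFS Realtime format.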

Bus operators are developing bus location systems to monitor their operations. At major terminals, displays show information such as the next arriving bus and any delays, but most bus stops have nothing more than a posted timetable. In my laboratory, therefore, we are researching and developing smart bus stops that fully utilize IoT (Internet of Things) and ICT (Information and Communication Technology), in collaboration with the city of Nisshin, which operates the community's "Kururin Bus."

For bus stops located in rural or suburban areas, one challenge is the difficulty of installing power supply systems to display real-time information. To address this, we developed bus stops using energy-efficient electronic paper powered solely by solar energy. The display shows announcements about bus operations from the local government and disaster information, making it useful not only as a bus stop but also as an information dissemination point in the community. Despite the widespread use of smartphones, many elderly people still cannot use them. Especially during disasters, it’s crucial to be able to receive real-time information, and we hope to widely implement these smart bus stops to achieve that.


Feeling Connected to Society Through Research

The bus initiatives in Nisshin City stemmed from discussions with Professor Yukimasa Matsumoto from the Department of Civil Engineering and Architecture, starting about ten years ago. The actual systems are developed by students, allowing them to experience firsthand how their research and activities directly impact society.

My expertise lies in computer networks and ubiquitous computing, and I have always been interested in using this knowledge to solve everyday problems. The development of smart bus stops and bus location systems grew out of the idea of applying IoT technology. There are many societal issues that ICT can address, and recognizing these opportunities can lead to significant innovations. It is crucial to stay informed about the world and to collaborate with people in other fields, leveraging each other's knowledge and technology for co-creation. I encourage students to take an interest in fields beyond information engineering, engage with the broader world, and participate in diverse activities.

Interview Date: April 14, 2021


“What kind of scent makes a virtual space feel real?” An interview with Professor Yasuyuki Yanagida.

If television and the telephone are technologies that bring us distant sights and sounds, then technologies that make virtual spaces feel real can be called virtual reality (VR). Typically, you enter a virtual space through a goggle-style head-mounted display, which engages primarily your sight and hearing. In recent years, however, 4D theaters have multiplied, adding movement, scents, wind, and water to the visual and auditory experience. Ericsson in Sweden has published a research report anticipating services linked to all of sight, hearing, taste, smell, and touch by 2030. Professor Yasuyuki Yanagida discusses these new VR technologies that make virtual worlds feel real.


Controlling Scents to Create the Atmosphere of a Place

In 1968, Ivan Sutherland in the United States developed the world's first head-mounted display. Since that invention, research institutions around the world have worked on VR, and audiovisual VR has now moved into industrial use. Going forward, software and content will matter as much as hardware. Research on two further senses has also been under way since the field's early days: force feedback, which lets us feel weight and pressure, and tactile feedback, which lets us perceive surface textures such as smoothness or roughness. Even with these advances, however, we still cannot replicate the feeling of actually being in a place, its "atmosphere." It is as if we were experiencing VR through a spacesuit. I therefore believe that scents will play a crucial role in making people feel truly present, and I am advancing research in VR that appeals to the sense of smell.

The challenge with scents lies in the current inability to synthesize them effectively. In the case of vision, humans have three types of cone cells, most sensitive to red, green, and blue light respectively, and by adjusting the balance of these three primaries, virtually any color can be reproduced. For smell, however, humans are said to have about 400 types of olfactory receptors, and each receptor can respond to multiple odor molecules. This complexity makes it difficult to build a device that can produce "any scent." Nevertheless, as research into the mechanisms of olfactory perception progresses rapidly, an efficient way of coding scents may be discovered in the near future, enabling the generation of a wide range of odors.
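
To make the contrast concrete, here is one simplified way to state it (a sketch, not a full model of perception). The visual system reduces any light spectrum $s(\lambda)$ to just three numbers,

$$c_i = \int s(\lambda)\, m_i(\lambda)\, d\lambda, \qquad i = 1, 2, 3,$$

where the $m_i$ are the sensitivity curves of the three cone types. Two lights with the same triple $(c_1, c_2, c_3)$ look identical, so a display only has to match three values, which three primaries can do. The analogous description of smell would involve some 400 receptor activations, with each odor molecule exciting many receptors at once, and no comparably small set of "primary odors" spanning that space has been found.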

As a “VR specialist,” I am not focused on the synthesis of scents but on “how to control scents temporally and spatially.” There are various methods for presenting scents. For example, attaching a scent generator to a head-mounted display is a classic approach. Another method involves attractions in theme parks that synchronize scents with visuals, using large-scale equipment to introduce and then quickly withdraw the scent. In contrast, I am exploring how to efficiently deliver a minimal amount of scent directly to the nose. My research aims to present scents briefly and locally without requiring the user to wear a device or rely on large equipment. One of the solutions I conceived is using an “air cannon.” Many people have seen air cannons used in science classes, where a puff of air is propelled forcefully. I began researching how this could be utilized to present scents indoors to a specific person.


Providing Experiences of Other Worlds, Free from the Constraints of Time and Place

I believe there are various ways to utilize VR with scents. For example, it could be used as an advertising tool. Think of the enticing aroma of grilled eel wafting from an eel restaurant, which acts as a form of advertisement by drawing in passersby. We could do something similar on an individual basis: in shopping malls, sensors could identify the attributes of people passing by and subtly release corresponding scents. This kind of application might not be too far off. There are also attempts to add scent generators to smartphones. While such technology may not become widespread immediately, it will likely see more use as know-how accumulates and development progresses.

Expanding VR to the five senses is one of my major goals. I aim to advance research by examining the interplay between “sight,” “hearing,” “touch,” “smell,” and “taste.” As urban dwellers, we are inundated with information via electronic media, leading to fewer tangible experiences. Although VR is an electronic medium, my goal is to develop technologies that offer immersive experiences beyond just visual and auditory stimuli, free from the constraints of time and place.

Interview Date: April 14, 2021


"How do they know what products you like?" An interview with Associate Professor Yoshitaka Kameya.

Online shopping sites, video streaming services, and music streaming platforms often seem to know your preferences and recommend products or content tailored to your tastes. How is this possible? These "recommendation systems" hold large amounts of data and use your selections to identify correlations and suggest items or content that match your interests. What kind of technology enables this? Associate Professor Yoshitaka Kameya explains.


Recommending Songs That Match Your Preferences, Based on Music Reviews

Recommendation systems are now ubiquitous, suggesting personalized options not only for music but also for books, clothing, real estate, and even friends. The mechanism behind them is relatively straightforward: machine learning techniques predict whether a user is likely to purchase an item based on their purchase history, the content of purchased items, and demographic information such as age and gender. Two commonly used methods are "item-based collaborative filtering" and "user-based collaborative filtering." Item-based collaborative filtering identifies and recommends items similar to the one currently being viewed, an approach reportedly employed by some of the world's leading e-commerce companies. User-based collaborative filtering, in contrast, finds users in the database whose purchase histories or attributes resemble the current user's and recommends products those similar users have bought. Both ideas are simple, but they require processing vast amounts of data.
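
To make the idea concrete, here is a minimal sketch of item-based collaborative filtering on an invented toy rating matrix; production systems apply the same similarity computation to millions of users and items.

```python
# Minimal sketch of item-based collaborative filtering on a toy
# user-item matrix (rows = users, columns = items). All data here
# is invented for illustration.
import numpy as np

ratings = np.array([  # 4 users x 5 items; 0 means "not purchased/rated"
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 1],
    [1, 1, 5, 4, 0],
    [0, 1, 5, 4, 5],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two item column vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def similar_items(item: int, k: int = 2) -> list:
    """Indices of the k items most similar to the given item."""
    sims = [cosine_sim(ratings[:, item], ratings[:, j])
            for j in range(ratings.shape[1])]
    order = np.argsort(sims)[::-1]
    return [int(j) for j in order if j != item][:k]

print(similar_items(0))  # items to recommend alongside item 0
```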

My lab uses these techniques to build a music recommendation system, leveraging CD databases that include song comments supplied by music publishers. Today, people can easily sample music on platforms like YouTube, but when I was a student, I would read the point-of-purchase (POP) recommendation cards in record stores and imagine what an album might sound like before buying it. That experience gave me the idea of recommending songs using recommendation comments and review texts.

In this system, a user inputs a song they like, and the system searches reviews in a database covering roughly 50,000 songs for songs described with the same words or conveying a similar impression. Users can set whether to give priority to the lyrics or to the music itself; based on that setting, the system recommends songs with similar lyrical content and mood, or songs with a similar melody and tempo. Currently the system suggests a few songs at a time, but we aim to generate playlists that string multiple recommendations into a meaningful story, providing a richer music experience.
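
A rough sketch of the review-based approach, using standard text-similarity tools on a few invented reviews (the actual system works over the 50,000-song database and adds the lyrics/music weighting described above):

```python
# Minimal sketch of recommending songs by review-text similarity,
# using TF-IDF vectors and cosine similarity. The reviews below are
# invented stand-ins for a real review database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = {
    "Song A": "uplifting summer anthem with bright guitars",
    "Song B": "melancholy piano ballad about lost love",
    "Song C": "bright, driving guitar pop for summer drives",
    "Song D": "slow, sad ballad with sparse piano",
}

titles = list(reviews)
matrix = TfidfVectorizer().fit_transform(reviews.values())

def recommend(title: str, k: int = 2) -> list:
    """Songs whose reviews are most similar to the given song's review."""
    i = titles.index(title)
    sims = cosine_similarity(matrix[i], matrix).ravel()
    ranked = sims.argsort()[::-1]
    return [titles[j] for j in ranked if j != i][:k]

print(recommend("Song A"))  # likely ['Song C', ...]
```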


Applying Big Data Analysis to the Medical Field

Data analysis can be applied in many fields. Under that broad keyword, some of my students use statistical methods to analyze sports data, while others employ "cluster analysis," a method that finds groups of similar data points within large datasets, to analyze shoppers' behavior and help revitalize local shopping districts.
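
As an illustration of the clustering step, here is a minimal sketch on invented shopper data; a real study would use far richer features.

```python
# Minimal sketch of cluster analysis on invented shopper data
# (visits per month, average spend). This only illustrates the
# grouping step, not a real study's feature set.
import numpy as np
from sklearn.cluster import KMeans

shoppers = np.array([
    [2, 1500], [3, 1800], [2, 1600],    # occasional, small-basket visitors
    [12, 900], [14, 1100], [13, 950],   # frequent, small purchases
    [4, 9000], [5, 8500],               # rare but big spenders
], dtype=float)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shoppers)
print(labels)  # shoppers grouped into three behavioral segments
```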

Currently, I am researching the application of big data analysis for pattern discovery in the medical field. Recently, polypharmacy, in which multiple medications prescribed by different medical institutions and departments interact to cause adverse effects, has become a concern. It can lead to a "prescription cascade," in which side effects are mistaken for new symptoms and treated with yet more medication. Certain drug combinations can cause hypotension, leading to dizziness, falls, and even leaving patients bedridden. Therefore, in collaboration with the National Center for Geriatrics and Gerontology, we are analyzing past case data to identify patterns in the drug combinations that lead to hypotension. By putting these patterns in the hands of pharmacists who review prescriptions, we aim to prevent polypharmacy and contribute to the appropriate use of medications.
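
The core counting idea can be sketched in a few lines on invented prescription records; the actual analysis with the National Center for Geriatrics and Gerontology is of course far more careful statistically.

```python
# Minimal sketch of mining drug combinations that co-occur with
# hypotension, on invented prescription records. Only the counting
# idea is shown; real analyses control for many confounders.
from collections import Counter
from itertools import combinations

# (set of drugs prescribed, did the patient develop hypotension?)
records = [
    ({"drugA", "drugB", "drugC"}, True),
    ({"drugA", "drugB"}, True),
    ({"drugA", "drugC"}, False),
    ({"drugB", "drugD"}, False),
    ({"drugA", "drugB", "drugD"}, True),
]

pair_hypo, pair_all = Counter(), Counter()
for drugs, hypotension in records:
    for pair in combinations(sorted(drugs), 2):
        pair_all[pair] += 1
        if hypotension:
            pair_hypo[pair] += 1

# "Confidence": fraction of patients on a drug pair who became hypotensive.
for pair, n in pair_all.items():
    if n >= 2:  # ignore pairs seen only once
        print(pair, f"{pair_hypo[pair] / n:.0%}")
```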

Interview Date: February 13, 2021


“Is AI Really Safe?” An interview with Professor Masaya Yoshikawa.

Since the 2010s, artificial intelligence (AI) has made remarkable progress and now supports every aspect of our daily lives. It comes pre-installed in household appliances and smartphones, and in recent years it has been used in autonomous driving. But is AI truly safe? In 2023, conversational AI (chatbots) advanced to the point of rivaling human abilities. This has become an international issue, not only because existing jobs may disappear but also because AI could potentially endanger human lives. In this interview, we spoke with Professor Masaya Yoshikawa about "AI and security."


Four Security Challenges Related to AI

Currently, there are four major issues concerning AI and security. The first is the possibility of "deceiving AI." For example, the cameras installed in self-driving cars are supposed to recognize stop signs and bring the vehicle to a halt. However, it has been shown that a slightly altered stop sign can look normal to the human eye yet become unrecognizable to a self-driving car, risking collisions with oncoming traffic. Self-driving cars also use distance sensors that emit laser pulses and measure the time they take to bounce back from surrounding objects; if an attacker injects spoofed pulses, the distance measurements can be corrupted. Tampering with the external data that AI relies on for its decisions has become a major issue.
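
The stop-sign attack belongs to a family known as adversarial examples. A minimal sketch of the standard fast gradient sign method (FGSM) on an invented toy classifier shows the mechanism: nudge the input in exactly the direction that increases the model's loss.

```python
# Minimal sketch of an adversarial-example attack (FGSM) on an
# invented toy classifier. The model and input are stand-ins for an
# image classifier and a sign image.
import torch
import torch.nn.functional as F

# Fixed toy linear classifier: logits = x @ W.T (2 classes, 4 features).
W = torch.tensor([[ 1.0,  1.0,  1.0,  1.0],
                  [-1.0, -1.0, -1.0, -1.0]])
x = torch.tensor([[0.6, 0.6, 0.6, 0.6]], requires_grad=True)
label = torch.tensor([0])  # the input's true class

loss = F.cross_entropy(x @ W.T, label)
loss.backward()  # gradient of the loss with respect to the *input*

eps = 0.7  # exaggerated for this 4-feature toy; on real images the
           # change is spread over thousands of pixels and can be tiny
x_adv = (x + eps * x.grad.sign()).detach()  # FGSM: one signed-gradient step

print((x @ W.T).argmax(1).item())      # 0 -> correctly classified
print((x_adv @ W.T).argmax(1).item())  # 1 -> the same input, misclassified
```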

The second issue is protecting AI from side-channel attacks. These involve the use of such physical information as power consumption, electromagnetic radiation, and processing time to infer internal data and cryptographic keys. To prevent side-channel attacks, it is necessary to decouple this physical information from AI’s internal data.
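
To show why such leakage matters, here is a simulated sketch: we pretend a device's power draw tracks the Hamming weight (number of 1 bits) of a secret byte combined with known inputs, then recover the secret purely from the statistics. Real attacks follow the same correlation pattern on measured traces of real hardware.

```python
# Minimal sketch of a (simulated) power side-channel attack. The
# "traces" are synthetic: leakage proportional to a Hamming weight,
# plus noise. Real attacks use measured power and a cipher's S-box.
import numpy as np

rng = np.random.default_rng(0)
SECRET = 0x3C  # the byte the attacker wants to recover

def hw(v) -> int:
    """Hamming weight: number of 1 bits in a byte."""
    return bin(int(v)).count("1")

inputs = rng.integers(0, 256, size=2000)  # known public inputs
traces = np.array([hw(p ^ SECRET) for p in inputs]) + rng.normal(0, 1.0, 2000)

def corr(guess: int) -> float:
    """Correlation between a key guess's predicted leakage and the traces."""
    model = np.array([hw(p ^ guess) for p in inputs])
    return abs(np.corrcoef(model, traces)[0, 1])

recovered = max(range(256), key=corr)
print(hex(recovered))  # 0x3c: the secret falls out of the statistics
```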

The third issue is protecting AI’s training data. It is said that one can infer the data that trained an AI by analyzing its judgments. Since AI is used in the medical field, where it handles patients’ medical records and other highly sensitive data, there is the risk of a data breach of personal and private information. Therefore, it is crucial to ensure that AI’s training data cannot be inferred.
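
The intuition can be demonstrated in a few lines: models that overfit are measurably more confident on the records they were trained on, and an attacker can exploit that gap (all data below is invented).

```python
# Minimal sketch of the intuition behind training-data inference:
# an overfit model scores its own training records with higher
# confidence than unseen records, leaking membership information.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + rng.normal(0, 1.0, 200) > 0).astype(int)  # noisy labels
X_train, y_train, X_out = X[:100], y[:100], X[100:]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

conf_members = model.predict_proba(X_train).max(axis=1)
conf_outsiders = model.predict_proba(X_out).max(axis=1)
print(round(conf_members.mean(), 2), round(conf_outsiders.mean(), 2))
# Members typically score noticeably higher: "was this record in the
# training set?" can often be answered better than chance.
```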

The fourth issue is preventing the contamination of training data. Using a method called "poisoning," an attacker can deliberately corrupt training data so that the AI makes incorrect judgments, for instance misclassifying "A" as "B." Numerous instances of poisoning have been reported.
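
A minimal sketch of label-flip poisoning on invented data shows the mechanism; real attacks are subtler, but the principle of corrupted data in, wrong judgments out is the same.

```python
# Minimal sketch of "poisoning": relabeling one region of the training
# data makes the learned model systematically wrong. Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # true rule: sign of x0 + x1
X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

clean = LogisticRegression().fit(X, y)

# The attacker poisons one region: every training example with a large
# first feature is relabeled as class 0, biasing the learned boundary.
y_poisoned = y.copy()
y_poisoned[X[:, 0] > 0.8] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
# The poisoned model's accuracy typically drops sharply on exactly the
# kind of inputs the attacker targeted.
```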


Protecting AI from Malicious Attackers

In the future, various types of AI will be used in numerous fields and involve combinations of technologies, such as “AI × 〇〇 × □□.” Drawing on my expertise in hardware security, I am currently researching how to protect AI from malicious attackers by integrating “AI × security” and “AI × security × hardware.” Although this field is still in its early stages, I believe it is crucial for AI’s real-world applications.

As various jobs shift to AI, we must consider its trustworthiness. For instance, AI used in autonomous driving—and in facial recognition systems for detecting suspicious individuals—directly impacts human lives. Should problems arise, there might be social unrest. Moreover, hackers could extract valuable information without being detected.

Hackers probe every possible angle and strike at the weakest point. As long as lawless, hostile actors exist, security challenges will remain. Still, I believe it is the mission of our university to think five to ten years ahead to ensure safety and security.

Interview Date: January 22, 2021