Nite Knight

Your Personal Protection Robot

The Nite Knight project aims to create a solution for those who feel unsafe on walks around their neighborhood. Our team designed an assistant robot that provides a calming presence, quick access to emergency services, and features that allow loved ones to keep tabs on the user. The design of Nite Knight considers potential user personas, user-centered UI/UX, current and future technological possibilities, and the future of artificial intelligence. Continue reading below to find out more about Nite Knight!

The Team: Ed Lai, Lynna Ye, Maria Decelles, Matt Ebisu, Sam Bruce 

Background

Pedestrian safety is a complex and evolving problem. Regional, cultural, and economic differences worldwide all play into an individual's perception of safety. In the United States, 37% of people report not feeling safe walking at night near their homes1, and the figure is dramatically worse for women. In the UK, similar statistics have been recorded: 37% of women and 13% of men feel unsafe while walking near their homes2. This perception of danger stems mainly from the threat of crime; however, other safety concerns are on the rise. As smartphones become more prevalent, situational awareness is declining, increasing the risk of physical harm to pedestrians3. These issues are extremely important for a functioning and happy society. Perceived safety affects not only day-to-day satisfaction but also overall health and well-being: those who feel unsafe tend to get less daily movement and exercise4.

This project is for a social robot that will act as a companion for travel late at night or for elderly or adolescent users. The system will use technologies such as GPS to navigate, facial recognition analysis to understand the user's emotions, and machine learning to recognize potential harm in the nearby vicinity. 

The scope of this project is the ideation and some prototype development of Nite Knight. This project will include a user interface design and prototype, high level design of interface and structural elements, and information architecture strategy.


1. https://news.gallup.com/poll/179558/not-feel-safe-walking-night-near-home.aspx

2. https://blog.gitnux.com/walking-alone-at-night-statistics/ 

3. https://www.sciencedirect.com/science/article/pii/S0167739X22001170#sec9 

4. https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0040306 

Users

The intended user group for this product is mainly elderly people and adolescents, along with any individuals who want extra protection and the ability to let loved ones monitor them while walking to different destinations. Below is a set of user personas that outlines our user group.

Karen Rodrigo, Age: 70


Ming Lee, Age: 68 


Jacob Pratt, Age: 45


Tommy Miller, Age: 12


James Gator, Age: 22


Task Analysis

To begin our design, a task analysis was done examining the typical user and the steps they might take to walk from one destination to another. This analysis covered main tasks as well as subtasks. Taking it further, our team performed a task classification by deciding what types of information were needed for every user action and decision. Finally, we landed upon an automation strategy based on a thorough task allocation study and subsequent design of the levels of automation.

System Design and User Experience

With our users identified and our automation strategy decided upon, our team was able to begin the design of our system. The system will function as a combination of a phone-based application and a robotic mobile companion. The phone-based application would be used by the guardian to control the robot's functions and monitor the companion user virtually. The robot can also function without phone control; because it acts as a social robot, it has facial and emotive displays. Some of the main features of the system would include:


5. https://doi.org/10.3390/su12083133 

6. https://doi.org/10.3390/ijerph19031383

In discussing Nite Knight functionality, we decided to create three flow charts, one for each mode of operation the robot would perform. These three modes are Friendly Guide Mode, Social Mode, and Protective Mode. To accompany our robot's functionality charts, our team designed and sketched the physical portion of Nite Knight. Below you can see these flow charts and corresponding physical features that enable Nite Knight's functionality. An additional chart summarizing the user interactions with the Nite Knight system can be found at the end of this section. 

Friendly Guide Mode

Friendly Guide Mode is Nite Knight's default mode. It aims to safely guide the user to their desired destination by following a preset path determined by GPS and informed by local crime and traffic mapping data. The system will also make recommendations for travel based on the projected weather forecast. As the user walks to their destination, the robot handles three main tasks:
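To make the route-selection idea concrete, below is a minimal Python sketch of how candidate paths might be scored against distance, crime, traffic, and weather data. The `Route` fields, weight values, and sample routes are illustrative assumptions, not parameters from the actual Nite Knight design.

```python
from dataclasses import dataclass

# Hypothetical route-scoring sketch. Each candidate path from the GPS planner
# carries a distance plus crime, traffic, and weather risk indices in [0, 1].
# The weights and sample routes are illustrative, not tuned Nite Knight values.

@dataclass
class Route:
    name: str
    distance_km: float
    crime_index: float      # from local crime mapping data
    traffic_index: float    # from local traffic mapping data
    weather_penalty: float  # from the projected forecast (rain/ice raises it)

def route_cost(r: Route, w_dist=1.0, w_crime=3.0, w_traffic=1.5, w_weather=1.0) -> float:
    """Lower is better; safety factors are weighted above raw distance."""
    return (w_dist * r.distance_km + w_crime * r.crime_index
            + w_traffic * r.traffic_index + w_weather * r.weather_penalty)

candidates = [
    Route("Main St", 1.2, 0.6, 0.4, 0.1),
    Route("Riverside path", 1.5, 0.2, 0.1, 0.3),
]
best = min(candidates, key=route_cost)
print("Recommended route:", best.name)
```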

Social Mode

Social Mode can be activated as a subroutine while in Friendly Guide Mode. When the user talks to Nite Knight, it responds to the user in a friendly and compassionate way. The robot can engage in casual conversation, respond emotively via its front screen, and provide information on a range of topics the user may ask about. The purpose of this mode is to establish Nite Knight as a companion to the user, rather than just a safety tool.

Protective Mode

Protective Mode is engaged when the robot's AI detects danger (more on this below). When a threat is detected, audible and visual alerts are projected to the user with recommendations on how to proceed to safety. At the same time, Guardians are notified and emergency responders are called. 
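As a rough illustration of the Protective Mode cascade, the sketch below uses hypothetical placeholder functions for the speaker/screen alert, the guardian push notification, and the emergency call; the real system would wire these to actual hardware and network services.

```python
# Placeholder functions standing in for real hardware and network integrations.
def alert_user(threat: str) -> None:
    print(f"[SPEAKER/SCREEN] Threat detected: {threat}. Follow the on-screen route to safety.")

def notify_guardians(threat: str, location: tuple) -> None:
    print(f"[APP PUSH] Guardian alert: {threat} near {location}")

def call_emergency_services(threat: str, location: tuple) -> None:
    print(f"[CELL MODEM] Contacting emergency responders: {threat} at {location}")

def enter_protective_mode(threat: str, location: tuple) -> None:
    """Run the three Protective Mode responses described above."""
    alert_user(threat)
    notify_guardians(threat, location)
    call_emergency_services(threat, location)

enter_protective_mode("aggressive animal", (42.3601, -71.0589))
```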


Summary of User Interactions and Alerts

Companion App

The Nite Knight system will have a companion app that allows the user to interact with and set up their robot. This app will also allow Guardians to keep track of their loved ones using Nite Knight by feeding them location data and providing insight into their walk. A Guardian is anyone designated by the user to be allowed to access their Nite Knight information - typically loved ones, parents, etc. The video below gives an idea of what the Guardian experience on a mobile device might look like. 
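To give a sense of what the Guardian feed might carry, here is a hypothetical sketch of a single location update shared only with designated Guardians. The field names and schema are assumptions for illustration, not the app's actual data format.

```python
import json
import time

# Hypothetical shape of one location update pushed to designated Guardians.
# Field names are illustrative; the real companion app would define its own schema.
def build_guardian_update(user_id: str, lat: float, lon: float, mode: str) -> str:
    update = {
        "user_id": user_id,
        "timestamp": int(time.time()),
        "position": {"lat": lat, "lon": lon},
        "mode": mode,                      # "friendly_guide", "social", or "protective"
        "shared_with": "guardians_only",   # only accounts the user has designated
    }
    return json.dumps(update)

print(build_guardian_update("karen-r", 42.3601, -71.0589, "friendly_guide"))
```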

Physical Design Inspiration

When envisioning the design of our protective social robot, we drew inspiration from the Umbrella Movement, the 2014 Hong Kong democracy protests. Amidst the tumultuous clashes between police and civilians, protestors embraced peaceful methods of self-defense, employing umbrellas as shields against pepper spray and tear gas. Bryan Druzin, an assistant professor at the Chinese University of Hong Kong, noted that the umbrella not only served as a functional shield but also held deep symbolic significance as a representation of passive resistance.

The design philosophy behind our escort protector robot, Nite Knight, intentionally echoes this emblem of nonviolent resistance against threats and aggression. The robot's primary focus is to provide security and companionship without any weaponry or projectiles. Instead, it features a collapsible transparent shield crafted from industrial-strength bulletproof Kevlar. This shield, extending up to 6 feet tall and 4.5 feet in diameter, unfolds from the robot's omnidirectional base. When faced with potential danger, the robot can autonomously deploy its shield, adjusting its angle through a full 360 degrees to safeguard its human counterpart.

AI Integration

Threat detection by Nite Knight will be performed by a trained AI. As with all AI models, the Nite Knight AI will need a set of quality training data in order to recognize and accurately identify threats. This training can be broken down by our available inputs: video data and audio data. For video training, our designers need to ensure the AI is trained with sufficient data on each harm the system should detect; for example, training videos must include large amounts of footage of aggressive humans as well as aggressive animals. Audio training will be similar, where the model will need sufficient examples of each hazard or alert sound.

The main body of training data for aggressive-creature identification will be police body camera footage. This source provides a consolidated place to find video of public danger without having to sort through unfiltered internet videos or create large amounts of training data from scratch. As our device is intended for use in higher-population-density areas, this training data is well suited because it includes the types of living-creature hazards our users might encounter. This data will need to be curated to avoid reproducing bias shown by law enforcement, and the AI will need adjustment and testing to confirm there is no bias in threat detection. This topic is addressed below in the ethical implications section.

In addition to body camera footage, our team will incorporate targeted video training based on research into body language indicative of aggressive behavior. With Google AI’s Teachable Machine, our team was able to generate a demonstration of how this might work. With a training set of data aimed at identifying several different poses, the AI model can identify aggressive poses in real time. The video below demonstrates this AI's functionality. It was trained using several short videos of the operator performing different friendly and aggressive stances. 
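For readers curious how such a demonstration could be wired up, the sketch below loads a Teachable Machine export and classifies a single webcam frame. It assumes a Keras image-model export ("keras_model.h5" with a matching "labels.txt") and placeholder class names; the actual demo used Teachable Machine's pose project, so treat this as an approximation rather than the exact pipeline.

```python
import cv2
import numpy as np
import tensorflow as tf

# Hedged sketch: load a Teachable Machine export and classify one webcam frame.
# "keras_model.h5", "labels.txt", and the class names are placeholders.
model = tf.keras.models.load_model("keras_model.h5", compile=False)
# Teachable Machine label files use lines like "0 friendly"; keep only the name.
labels = [line.strip().split(maxsplit=1)[-1] for line in open("labels.txt")]

def classify_frame(frame: np.ndarray) -> str:
    """Return the most likely stance label for a single BGR camera frame."""
    resized = cv2.resize(frame, (224, 224))                 # model expects 224x224
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32)
    batch = (rgb / 127.5 - 1.0)[np.newaxis, ...]            # scale to [-1, 1]
    scores = model.predict(batch, verbose=0)[0]
    return labels[int(np.argmax(scores))]

cap = cv2.VideoCapture(0)   # default webcam
ok, frame = cap.read()
if ok:
    print("Detected stance:", classify_frame(frame))
cap.release()
```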

Learn more about Teachable Machine here!


Using this type of training, we are able to gather research on aggressive body language and create a set of target data for the system to be trained on. For example, in the article "Aggressive Body Language: 15 Cues and How to De-escalate", Vanessa Van Edwards outlines 15 cues people use to express aggression. While this list is not exhaustive, it is a good place to begin training an AI on both body language and facial expressions that could represent danger.

 This list includes:  

In a similar vein, detection of difficult or dangerous terrain can be trained from a preset list of conditions considered dangerous to pedestrians. Luckily, because terrain is less dynamic than a moving body such as a human, terrain detection can be trained from still images. In a populated area, there is a discrete number of hazardous conditions that can present themselves; for example, uneven pavement, fallen or low trees, wet or icy pavement, and path blockages account for the majority of environment-based hazards someone might encounter. With that relatively limited list, image data covering each of these can be acquired fairly easily.
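A possible way to train such a terrain classifier is sketched below using transfer learning over a folder of labeled hazard photos. The directory layout, class names, image size, and MobileNetV2 backbone are illustrative assumptions rather than decisions from the Nite Knight design.

```python
import tensorflow as tf

# Illustrative sketch: transfer learning over a folder of labeled hazard photos.
# One subfolder per class, e.g. terrain_images/icy_pavement/, terrain_images/clear/.
NUM_CLASSES = 5   # uneven pavement, fallen/low trees, icy/wet pavement, blockage, clear

train_ds = tf.keras.utils.image_dataset_from_directory(
    "terrain_images/",
    image_size=(224, 224),
    batch_size=32,
)

backbone = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")
backbone.trainable = False   # reuse pretrained features, train only the small head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```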

While acquiring and curating video data for AI training may be labor intensive, audio training should be somewhat easier. Many sources of danger that are telegraphed audibly have already been recorded, curated, and made publicly available. With the rise of digital media, especially video games, audio file assets are very easy to come by; they are often searchable by name and easily acquired online. For example, audio files of all regulated police and emergency responder sirens can be found and downloaded with a few clicks. Other examples, such as crowds shouting or reacting poorly to a situation, are also easily found.
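One simple (and deliberately naive) way to use such a library of reference sounds is to compare incoming audio against stored clips by their averaged MFCC features, as sketched below. The file paths are placeholders, and a production system would more likely use a trained audio classifier than this nearest-neighbor comparison.

```python
import numpy as np
import librosa

# Placeholder reference clips; a real system would use a trained audio classifier.
def mfcc_signature(path: str) -> np.ndarray:
    """Average MFCC features over time to get one fixed-size vector per clip."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

references = {
    "police_siren": mfcc_signature("sounds/police_siren.wav"),
    "crowd_shouting": mfcc_signature("sounds/crowd_shouting.wav"),
}

def closest_match(clip_path: str) -> str:
    """Name of the reference sound whose signature is nearest to the clip's."""
    sig = mfcc_signature(clip_path)
    return min(references, key=lambda name: np.linalg.norm(references[name] - sig))

print("Closest known sound:", closest_match("sounds/unknown_clip.wav"))
```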

Danger Decision

Once the AI is trained and able to recognize hazards with high enough confidence, the difficult question of "deciding danger" comes into play. Despite being trained on a large data set, the model will always produce some false positives and false negatives. When designing this product, our team discussed the implications of the system identifying danger when there is none, and of failing to identify danger when there is. Each of these failure modes causes harm: if danger is flagged when there is none, Nite Knight could be perceived as biased or reactionary; if a real danger goes undetected, the user could be hurt. To mitigate both, we created a flow chart for how the system could identify and react to danger.

This flow chart describes how the system decides whether there is danger and what action to take based on that decision. To lower the impact of false positives, the user is able to cancel a call to emergency responders at any time. To keep this feature safe, the system does not wait for a positive confirmation of danger from the user. Rather than having the user confirm danger, the system assumes danger is present and calls for help within a short time window unless the call is cancelled. If there is no danger, the user can simply cancel; if there is, they get the help they need without having to speak or take any additional action.
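The cancellation window can be sketched in a few lines: once the model's confidence crosses a threshold, alerts go out and a countdown starts, and emergency services are called unless the user cancels in time. The threshold and window length below are illustrative values only.

```python
import threading

# Illustrative values only; the real threshold and window would come from testing.
CONFIDENCE_THRESHOLD = 0.85
CANCEL_WINDOW_SECONDS = 15

def handle_detection(threat: str, confidence: float, cancel: threading.Event) -> None:
    if confidence < CONFIDENCE_THRESHOLD:
        return                                   # below threshold: keep monitoring
    print(f"Alerting user and guardians: {threat} ({confidence:.0%} confidence)")
    # Assume danger is real and start the countdown; the user only acts to cancel.
    cancelled = cancel.wait(timeout=CANCEL_WINDOW_SECONDS)
    if cancelled:
        print("User cancelled within the window: standing down.")
    else:
        print("No cancellation received: calling emergency responders.")

cancel_button = threading.Event()
# cancel_button.set() would be wired to the user's cancel action in the app/robot UI.
handle_detection("aggressive human detected", 0.92, cancel_button)
```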

Hardware and Software

In thinking about the feasibility of this design, our team outlined some potential hardware paths that may prove useful in the construction of Nite Knight.


New User Walk Through

Karen, age 70, is retired but still lives a lively and active life and enjoys taking evening walks. Recently, Karen has been fearful of walking at night, especially since her partner passed away and she now lives alone. Her children live a few towns over. She and her children recently decided to purchase a Nite Knight in hopes of allowing Karen to go on walks independently at night, with the peace of mind that her family would be alerted if an emergency were to occur. The following chart shows the process Karen goes through to use the Nite Knight for the first time.

Discussion

Ethical, Societal, and Safety Implications  

An AI model can only be as good as the data that is fed to it. Our society contains bias and prejudice, and without proper care and consideration in our training data, that bias could make its way into the AI model. If our AI model, on average, considered certain races or ethnicities "more dangerous", our product would be a failure. To avoid this, only in-depth research and data focused on universal signs of aggression and danger would be considered for the AI model.

Our product aims to make individuals feel safer, but to do so, it uses features that could put the user at risk if they fell into the wrong hands. Learning from the issues with Apple's AirTag product, our location tracking will have the strongest data security possible to ensure that a user's location data is only accessible to those chosen by the user.

In discussing our product, our team felt strongly that this robot's design should not ostracize the user and make them feel embarrassed. The last thing our team wants is to create a product that people are ashamed to use. To tackle this, our robot has several modes varying in interaction level. Using our machine in the lowest interaction mode allows users to interact with others without coming off as unfriendly or strange. In this mode, the robot keeps itself quiet and small, following the user without obstructing the line of sight to strangers.  

Limitations

While machine learning has recently rocketed into the mainstream, it is still in its infancy. Chatbots like ChatGPT and Jasper are becoming more capable and knowledgeable every day, yet they still fall short of what human thought can produce. For Nite Knight, this implies that our danger detection AI may need several advances before it becomes feasible. While we demonstrated that a simple AI model can detect obvious body language, it is unclear what technological improvements would be necessary to increase this resolution. The Nite Knight AI needs to be correct with high confidence when it alerts that there is danger; while the system can have safeguards to prevent false alarms, the mere presence of false alarms can erode confidence in the system.

Nite Knight signifies a pioneering leap in automation, marking our entrance into the security product market. Selling a first-to-market product like this comes with its own set of challenges, particularly when it introduces the concept of incorporating an escort robot into daily life, which might not have been on the public's radar. Our team recognizes the significance of ensuring a seamless and inconspicuous rollout for Nite Knight. We've seen instances in other cities where attempts to introduce automated security robots have faced obstacles, leading to prematurely terminated pilot programs. For example, in 2021, the NYPD introduced Digidog, designed to investigate high-profile hostage situations, but it lasted a mere three months due to public outcry. This outcome highlights the need for a well-thought-out approach. When introducing Nite Knight, we'll focus on understanding the scope of the release, potentially starting with a controlled pilot program in select small neighborhoods before embarking on a broader rollout.

To achieve the full integration of Nite Knight into society, we'll need support not only from customers but also from the city administration and law enforcement agencies. This aspect poses significant challenges, particularly given the current heavy regulation of autonomous robots in specific regions. However, it's essential for the project's long-term success and the safety of our customers that they're aware of the robot's connection to the police grid. In case of emergencies, the robot can promptly contact the necessary officers and authorities. Our vision includes a small-scale pilot program lasting several months, aiming to demonstrate the tangible benefits of enhanced safety when walking alongside our robot. A successful pilot could pave the way for a smoother process in convincing the city to embrace Nite Knight as a valuable assisting entity. 

Future Directions

The security robotics market is expected to increase by $71.8 billion by 20277. With the rise of post-lockdown exploration and people wanting to get out of the house, crime has risen to a higher rate than before the pandemic8. Citizens around the world are leaving the house to visit friends and family, and with this new influx of pedestrian travelers, Nite Knight has a rare market opportunity to enrich the lives of thousands of people.

As crime detection and artificial intelligence improve rapidly, so will our algorithms for crime prevention and pedestrian safety. In an ideal world, we would be able to introduce Nite Knight as a callable 'on-demand service' that can be accessed from anyone's smartphone around the city. Major cities might be home to several robot deployment stations, and the robots could travel autonomously to reach private customers when called. Smaller cities might have an outdoor charging station, similar to a Blue Bike or Bird scooter system, from which customers can use the smartphone app to unlock the robot and have it escort them. If this is to be a success, we will need to do the following:

With AI advances, it is hoped that Nite Knight's AI can become nearly 100% accurate. Additionally, by using real-time training, our AI may be able to continuously update its training data set to include new or evolved threats that users face. This potential for an adaptable, high-confidence danger detection model would prove Nite Knight a worthy safety companion for all pedestrian travelers.


7. https://www.marketsandmarkets.com/PressReleases/security-robots.asp

8. https://thehill.com/policy/national-security/594291-how-violent-crime-has-gone-up-since-the-pandemic/