NJDOT Tech Talk! Webinar – Research Showcase: Lunchtime Edition 2025

Video Recording: 2025 Research Showcase Lunchtime Edition

On May 14, 2025, the NJDOT Bureau of Research, Innovation, and Information Transfer hosted a Lunchtime Tech Talk! webinar, “Research Showcase: Lunchtime Edition 2025”, featuring four presentations on salient research studies. As these studies were not shared at the 26th Annual Research Showcase held in October 2024, the webinar provided an additional opportunity for the over 80 attendees from the New Jersey transportation community to explore the wide range of academic research initiatives underway across the state.

The four research studies covered innovative transportation solutions in topics ranging from LiDAR detection to artificial intelligence. The presenters, in turn, shared their research on assessing the accuracy of LiDAR for traffic data collection in various weather conditions; traffic crash severity prediction using synthesized crash description narratives and large language models (LLMs); non-destructive testing (NDT) methods for bridge deck forensic assessment; and traffic signal detection and recognition using computer vision and roadside cameras. After each presentation, webinar participants had an opportunity to ask the presenters questions.


Presentation #1 – Assessing the Accuracy of LiDAR for Traffic Data Collection in Various Weather Conditions by Abolfazl Afshari, New Jersey Institute of Technology (NJIT)

Mr. Afshari shared insights from a joint research project between NJIT, NJDOT, and the Intelligent Transportation Systems Resource Center (ITSRC), which evaluated the accuracy of LiDAR in adverse weather conditions.

LiDAR (Light Detection and Ranging) is a sensing technology that uses laser pulses to generate detailed 3D maps of the surrounding area by measuring how long the pulses take to return after hitting an object. It offers high resolution and accurate detection regardless of lighting, making it well suited to real-time traffic monitoring.

The research study began in response to growing concerns about LiDAR’s effectiveness in varied weather conditions, such as rain, amid its increasing use in intelligent transportation systems. Mr. Afshari stated that the objective of the research was to evaluate and quantify LiDAR performance across multiple weather scenarios and for different object types—including cars, trucks, pedestrians, and bicycles—in order to identify areas for improvement.

To conduct the research, the team installed a Velodyne Ultra Puck VLP-32C LiDAR sensor with a 360° view at the Warren Street intersection near the NJIT campus in Newark. Mr. Afshari noted that newer LiDAR sensors with enhanced capabilities may outperform the Velodyne Ultra Puck during adverse weather. They also installed a camera at the intersection to verify the LiDAR results with visual evidence. The research team used data collected from May 12 to May 27, 2024.

The researchers obtained weather data from the Newark Liberty International Airport station and used the Latin Hypercube Sampling (LHS) method to identify statistically diverse weather periods for evaluation while maintaining a balance between clear and rainy days. They selected over 300 minutes of detection for the study.
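For readers unfamiliar with LHS, the idea can be sketched in a few lines of Python. This is an illustrative example, not the study's code, and the function name and parameters are hypothetical: each variable's range is split into equal strata, and exactly one sample is drawn per stratum, so even a small sample covers the full range of conditions (e.g., both light and heavy rainfall).

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Draw an n_samples x n_vars Latin Hypercube sample in [0, 1).
    Each variable's range is split into n_samples equal strata, and
    exactly one point lands in each stratum per variable."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_vars):
        # one stratified column per variable, independently shuffled
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [[cols[v][i] for v in range(n_vars)] for i in range(n_samples)]

# e.g., 10 candidate periods over two scaled variables
# (say, rainfall intensity and time of day)
pts = latin_hypercube(10, 2)
```

Scaling each unit-interval coordinate back to a real variable range (rainfall in mm/hr, hour of day) yields the diverse evaluation periods the stratification is meant to guarantee.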

The study area for the LiDAR detection evaluation

To evaluate how well the detection system performed under different traffic patterns, the researchers divided the study area into two sections. They used an algorithm to automatically count the vehicles and pedestrians entering each section from the LiDAR data, then validated the results against a manual review of the video captured by the camera.
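The validation step amounts to treating the manual video counts as ground truth and comparing them against the automated LiDAR counts. A minimal sketch, with a hypothetical function name and made-up counts rather than the study's actual data:

```python
def missed_per_hour(manual_count, lidar_count, hours):
    """Missed detections per hour, treating the manual video review
    as ground truth for the automated LiDAR counts."""
    missed = max(manual_count - lidar_count, 0)
    return missed / hours

# hypothetical hour of footage: 40 pedestrians counted on video,
# 39 picked up by the LiDAR detection algorithm
rate = missed_per_hour(40, 39, 1.0)
```

Computing this rate separately for clear and rainy periods is what allows the kind of weather comparison the team reported.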

The research team found that, overall, the LiDAR performed well, though with some deviations during rainy conditions. On rainy days, the LiDAR’s detection rate decreased for both cars and pedestrians, with the greatest challenges in accurately detecting pedestrians. On average, the LiDAR missed nearly 0.8 pedestrians and 0.7 cars per hour on rainy days, roughly 30 percent more than on clear days.

Key limitations identified by the researchers include inconsistent detection of pedestrians carrying umbrellas or other large concealing objects, difficulty distinguishing individuals walking in large groups, and missed detections of high-speed vehicles.

Mr. Afshari concluded that LiDAR performs reliably for vehicle detection, but pedestrian detection needs enhancement in poor weather, which would require updated calibration or improvements to the detection algorithm. He also noted the need for future testing of LiDAR in other weather conditions, such as fog or snow, to further validate the findings.

Q. Do you think the improvements for LiDAR detection will need to be technological enhancements or just algorithmic recalibration?

A. There are newer LiDAR sensors available that perform better in most situations, but the main component of LiDAR detection is the algorithm used to automatically detect objects. So, algorithmic calibration is the most important aspect for our purposes.

Q. What are the costs of using the LiDAR detector?

A. I am not fully sure as I was not responsible for purchasing the unit.


Presentation #2 – Traffic Crash Severity Prediction Using Synthesized Crash Description Narratives and Large Language Models (LLM) by Mohammadjavad Bazdar, New Jersey Institute of Technology

Mr. Bazdar presented research from a joint NJIT and ITSRC effort focused on predicting traffic crash severity by pairing synthesized crash description narratives with a Large Language Model (LLM). Predicting crash severity provides opportunities to identify factors that contribute to severe crashes—insights that can support better infrastructure planning, quicker emergency response, and more effective autonomous vehicle (AV) behavior modeling.

Previous studies have relied on traditional methods such as logit models, classification techniques, and machine learning algorithms like Decision Trees and Random Forests. However, Mr. Bazdar noted that these approaches struggle due to limitations in the data: crash reports often contain inconsistencies and missing values across attributes, making them unsuitable for traditional classification models. Even when such a model produces good results, it cannot reliably identify contributing factors because of all the data that must be excluded.

To address this challenge, the research team explored generating consistent and informative crash descriptions by converting structured parameters into synthetic narratives, then leveraging large language models (LLMs) to analyze those narratives and predict crash severity. Because LLMs can handle varied terminology and missing attributes, researchers can analyze all available data rather than only the minority of crash records free of inconsistencies or missing values.

The research team used BERT, an encoder-based LLM, to analyze over 3 million crash records from January 2010 to November 2022. Although crash reports often contain additional details, the team used only information on crash time, date, geographic location, and environmental conditions. They divided crash severity into three categories: “No Injury,” “Injury,” and “Fatal.”

The synthesized narratives consist of six sentences, each describing different features of the crash, such as time and date, speed and annual average daily traffic (AADT), and weather conditions and infrastructure. BERT then tokenizes and encodes each narrative to generate contextualized representations for crash severity prediction.
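The narrative-synthesis step can be illustrated with a simple template function. This is a sketch with hypothetical field names, not the team's implementation; the key behavior, noted in the Q&A below, is that a missing field is simply omitted from the narrative rather than forcing the whole record to be discarded.

```python
def crash_narrative(rec):
    """Render a structured crash record as a short narrative for an
    encoder LLM, silently skipping any missing (None) fields."""
    sentences = []
    if rec.get("date") and rec.get("time"):
        sentences.append(f"The crash occurred on {rec['date']} at {rec['time']}.")
    if rec.get("speed_limit") is not None and rec.get("aadt") is not None:
        sentences.append(f"The posted speed limit was {rec['speed_limit']} mph "
                         f"on a roadway with an AADT of {rec['aadt']}.")
    if rec.get("weather"):
        sentences.append(f"Weather conditions were {rec['weather']}.")
    if rec.get("light"):
        sentences.append(f"Light conditions were {rec['light']}.")
    if rec.get("road_type"):
        sentences.append(f"The crash took place on a {rec['road_type']}.")
    if rec.get("location"):
        sentences.append(f"The location was {rec['location']}.")
    return " ".join(sentences)

# hypothetical record with the light condition missing
record = {"date": "2021-06-04", "time": "17:30", "speed_limit": 45,
          "aadt": 28000, "weather": "rain", "light": None,
          "road_type": "divided highway", "location": "Essex County"}
narrative = crash_narrative(record)
```

The resulting text, complete or not, can then be tokenized by BERT in the usual way.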

The team found that a hybrid approach—using BERT to tokenize crash narratives and generate crash probability scores, followed by a classification model such as Random Forest to predict crash severity from those scores—performed best. An added benefit is that the hybrid model produces comparable, if not better, results than BERT alone, in hours rather than days.
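Conceptually, the hybrid pipeline reduces the second stage's job to learning decision boundaries over BERT's probability scores, which is far cheaper than fine-tuning the full model. The toy stand-in below uses a single learned threshold in place of a Random Forest (the study's actual second stage) just to show the interface between the two stages; the data and function name are hypothetical.

```python
def best_threshold(scores, labels):
    """Learn a single cut point over BERT-style severity scores:
    a toy stand-in for the Random Forest second stage."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(set(scores)):
        preds = [1 if s >= t else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# hypothetical BERT scores for four crashes (1 = injury, 0 = no injury)
t = best_threshold([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```

A real Random Forest plays the same role but can combine several score dimensions and learn non-linear boundaries.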

In the future, Mr. Bazdar and the research team plan to enhance their model by integrating spatial imagery, incorporating land use and environmental data, and utilizing decoder-based language models, hoping to achieve more effective results.

Q. How does your language model handle missing data fields?

A. The model skips missing information completely. For example, if there is a missing value for the light condition, the narrative will not mention anything about it. In traditional models, a report missing even one variable would have to be discarded. However, with the LLM approach, the report can still be used, as it may contain valuable information despite the missing data.

Q. What percentage of the traffic reports were missing data?

A. The problem is that while any single value, such as light condition, may be missing in only a small percentage of crash reports, a large portion—nearly half—of crash reports have some missing data or inconsistency.


Presentation #3 – Forensic Investigation of Bridge Backwall Structure Using Ultrasonic and GPR Techniques by Manuel Celaya, PhD, PE, Advanced Infrastructure Design, Inc.

Dr. Celaya described his work performing non-destructive testing (NDT) on the backwall structure of a New Jersey bridge, utilizing Ultrasonic Testing (UT) and Ground Penetrating Radar (GPR).

The bridge in the study, located near Exit 21A on I-287, was scheduled for construction; however, NJDOT had limited information about its retaining walls. To address this, NJDOT enlisted Dr. Celaya and his firm, Advanced Infrastructure Design, Inc. (AID), to assess the wall reinforcements—mapping the rebar layout, measuring concrete cover, and detecting potential cracks and voids in the backwalls.

The team used a hand-held GPR system to identify the presence, location, and distribution of reinforcement within the abutment wall. The GPR device collects data in both vertical and horizontal passes, indicating the location of reinforcement such as rebar and its depth below the surface. This information was needed to ensure that construction on the bridge above would not impact the abutment walls.

SAFT images of the bridge abutment produced by the Ultrasonic Testing

They also employed Ultrasonic Testing (UT), a method that uses multiple sensors to transmit and receive ultrasonic waves, allowing the team to map and reconstruct subsurface elements of the bridge wall. The system captures a detailed cross-sectional view of acoustic interfaces within the concrete using a grid-based measurement pattern, ensuring precise and reliable data collection. Additionally, they used IntroView to evaluate the UT data and produce Synthetic Aperture Focusing Technique (SAFT) images to illustrate and identify anomalies within the concrete.

AID also conducted NDT to assess the depth of embedded bolts in the I-287 bridge abutments using GPR scans, but aside from detecting steel rebar reinforcements, no clear signs of the bolts were found. However, the UT results offered valuable insights, revealing that the embedded bolts in the west abutment wall were deeper than those in the east abutment.

Q. What was the process workflow like for the Ultrasonic Testing?

A. It is not as intuitive as Ground Penetrating Radar. With GPR, you can clearly identify structures on site. With UT, however, the data requires post-processing analysis in the office; results cannot be obtained in the field. This analysis takes time and requires a certain level of expertise.


Presentation #4 – Traffic Signal Phase and Timing Detection from Roadside CCTV Traffic Videos Based on Deep Learning Computer Vision Methods by Bowen Geng, Rutgers Center for Advanced Infrastructure and Transportation

Mr. Geng shared insights from an ongoing Rutgers research project that evaluates traffic signal phase and timing detection using roadside CCTV traffic video footage, applying deep learning and computer vision techniques. Traffic signal information is essential for both road users and traffic management centers. Vehicle-based signal data supports autonomous vehicles and advanced Traffic Sign Recognition (TSR) systems, while roadside-based data aids Automated Traffic Signal Performance Measures (ATSPM) systems, Intelligent Transportation Systems (ITS), and connected vehicle messaging systems.

While autonomous vehicles can perceive traffic signals using on-board camera sensors, roadside detection relies entirely on existing infrastructure such as CCTV traffic footage. Mr. Geng noted that advancements in computer vision modeling provide a resource-efficient tool for improving roadside traffic signal data collection, compared to costlier alternatives such as infrastructure upgrades. For the study, the researchers set out to develop and implement methodologies for traffic signal recognition using CCTV cameras and to evaluate the effectiveness of different computer vision models.

Most previous studies have concentrated on vehicle-based traffic signal recognition, while roadside-based TSR has received relatively limited attention; some earlier studies inferred signal status from vehicle trajectories. Early research relied on traditional image processing techniques such as color segmentation, but more recent studies have shifted toward either a two-step pipeline using machine learning tools like You Only Look Once (YOLO) or deep learning-based end-to-end detection. Each approach has trade-offs: the two-step pipeline uses separate models for detection and classification, which requires coordination between stages and slows processing but makes debugging easier, while end-to-end detection is faster and more streamlined but harder to debug.

Real-time traffic signal detection using the research model

In this study, the researchers adopted three methodologies: two using the two-step pipeline and one using an end-to-end detection model. All three employed YOLOv8 for object detection but differed in their color-classification methods. The researchers used video data from the DataCity Smart Mobility Testing Ground in downtown New Brunswick, covering five signalized intersections.
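To illustrate what the color-classification stage of a two-step pipeline does after the detector localizes a signal head, the sketch below maps the mean RGB of a detected lamp crop to a signal state with hard-coded thresholds. This is a deliberately simplified stand-in, not any of the researchers' three methods; production systems typically work in HSV space and learn these boundaries from data.

```python
def classify_signal_color(mean_rgb):
    """Rough second-stage classifier: map the average RGB of a
    detected signal-lamp crop to red / yellow / green.
    Thresholds here are illustrative, not learned."""
    r, g, b = mean_rgb
    if r > 150 and g < 100:
        return "red"
    if r > 150 and g > 150:
        return "yellow"   # red and green channels both high
    if g > 150 and r < 100:
        return "green"
    return "unknown"      # occluded, off, or ambiguous crop

state = classify_signal_color((200, 50, 40))
```

In the end-to-end alternative, a single network predicts the signal state directly from the full frame, removing this hand-off between stages.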

The model achieved an overall accuracy of 84.7 percent, with certain signal colors detected more accurately than others. Mr. Geng shared that the research team was satisfied with these results. They see potential for the model to be used to support real-time traffic signal data logging and transmission for ATSPM and connected vehicle messaging system applications. 

Q. How many cameras did you have at each intersection?

A. For each intersection we had two cameras facing two different directions. For some intersections, we had one camera facing north and another facing south, or one facing east and the other facing west.

Q. What did you attribute the differences in color recognition to?

A. There were some computing resource issues. Since we are trying to implement this in real time, there are difficulties balancing accuracy against latency and processing time.

A recording of the webinar is available here.

Did You Know? AI in Transportation

Artificial Intelligence (AI) is rapidly reshaping transportation by improving safety, efficiency, and sustainability across various applications. From real-time traffic monitoring to predictive infrastructure maintenance, AI is becoming a critical tool for advancing transportation systems in New Jersey and nationwide. This article covers the use of AI in transportation research and implementation, with examples from the 2024 NJDOT Research Showcase, New Jersey, and other state DOTs.


AI on Display at the 2024 Research Showcase  

NJDOT held its 2024 Research Showcase on October 23, highlighting innovative transportation research and its implementation throughout New Jersey. During the morning panel discussion, Giri Venkiteela, Innovation Officer in NJDOT’s Bureau of Research, Innovation & Information Transfer, stated that Artificial Intelligence (AI) holds significant promise for economic and environmental advancements in transportation, owing to its real-time predictive capabilities, and proposed that NJDOT adopt protocols that can adapt to the pace of AI. Similar insights were heard throughout the showcase, where AI emerged as a central theme across numerous presentations and discussions.

AI, encompassing subcategories like Machine Learning (ML) and Artificial Neural Networks (ANN), allows researchers to analyze and model large data sets in real-time, saving significant labor hours and producing efficient, immediate results. Throughout the showcase, various projects ranging from enhancing pedestrian safety to predicting natural disasters utilized AI-based models.  

Deep Patel received the 2024 Outstanding University Student in Transportation Research award. As part of a research team at Rowan University, Patel deployed the YOLO-v5 AI model to analyze video data from multiple New Jersey intersections, providing information on pedestrian volumes, traffic volumes, and the rate of vehicles running red lights, among other variables. The team then ranked intersection safety using the metrics analyzed by the AI model.

Slide from Meiyin Liu’s presentation on real-time traffic flow analysis.

Patel’s research exemplifies the growing trend of integrating AI methods into traffic safety analyses, a trend that continued through several presentations in the afternoon Safety Breakout Sessions. Here, Rutgers professor Meiyin Liu presented her method for estimating real-time traffic flow through a combination of Unmanned Aerial Systems (UAS) and deep learning algorithms. A camera-equipped UAS records video of a highway, which is then transmitted to the YOLO-v5 computer vision model to detect vehicle volume and estimate speed. This data collection method enables real-time traffic flow analysis across comprehensive geographic coverage that could enhance traffic performance and crash risk prediction. Afterward, Branislav Dimitrijevic, a member of an NJIT research team, showcased an AI-driven project that used LiDAR technology and YOLO-v5 computer vision to activate a Rectangular Rapid Flashing Beacon (RRFB) when pedestrians approached crosswalks, enhancing road safety.

Poster by Indira Prasad from the 2024 NJDOT Research Showcase.

Multiple posters featured at the Research Showcase contained elements of AI, including “Integrating AI to Mitigate Climate Change in Transportation Infrastructure” by Indira Prasad and “Artificial Intelligence Aided Railroad-Grade Crossing Vehicular Stop on Track Detection and Case Studies” by researchers at Rutgers’ CAIT.

AI’s critical role in the maintenance and preservation of infrastructure was also evident in the afternoon’s Sustainability Breakout Sessions. Indira Prasad, a Stevens Institute of Technology graduate student, conducted a review of future innovations in sustainable and resilient infrastructure. Prasad explained how AI’s pattern recognition capabilities could be used to analyze large data pools and help forecast natural disasters, enabling a rapid response to augment existing infrastructure. Surya Teja Swarna, a Rowan University postdoctoral researcher, demonstrated an innovative approach in which state DOTs could use mobile phones mounted on vehicles to record roadway surface deformations, which would then be analyzed in real time by AI computer vision software, drastically reducing the time and costs required for road condition assessments.


Deployment of AI in Programs and Project Implementation  

In addition to research from academic institutions, State DOTs and various other state, local and public transportation organizations have started to deploy AI-based methods and tools on various programs and projects. 

Peter Jin, a Rutgers professor, received the 2024 NJDOT Research Implementation Award for his role in the New Brunswick Innovation Hub Smart Mobility Testing Ground (DataCity SMTG). The project, created in partnership with NJDOT, the City of New Brunswick, and Middlesex County, functions as a living laboratory for transportation data collection, with Self-Driving Grade LiDAR sensors and computing devices deployed across a 2.4-mile multi-modal corridor. Public- and private-sector users can draw on the data to enhance advanced driving systems, automated vehicle models, and other AI-based projects.

Additionally, NJDOT has established a program integrating unmanned aerial systems (UAS) into its transportation operations. UASs provide high-quality survey and data mapping information, which, when paired with AI-based technologies, can be analyzed in real time to document roadway characteristics or conduct damage assessments for natural disasters. Meiyin Liu’s real-time traffic flow assessment research is one example of how UAS can be paired with AI. 

The methods used by CAIT to detect and analyze railroad-grade crossings. Courtesy of CAIT.

The use of AI for railroad-grade crossing detection has been demonstrated on several projects in recent years.  NJ TRANSIT, the statewide transit agency, recently received a $1.6 million grant from USDOT to implement a railroad-grade crossing detection system. The system, developed in partnership with CAIT researchers, will be deployed at 50 grade crossings and aboard five light rail vehicles throughout the state. The railroad-grade crossing detection system features multiple cameras on grade crossings and light rail vehicles to record data for an AI computer vision model that monitors and analyzes grade crossing behavior such as near-miss incidents.

For a project recently completed with the Federal Railroad Administration, CAIT researchers examined “stopped-on-track” incidents, which are a leading cause of grade-crossing accidents. During the poster session at the 2024 NJDOT Research Showcase, CAIT’s researchers highlighted a detection system for identifying stopped-on-track incidents and case study examples of how the critical locations can be addressed through design or other interventions. They found that targeted intervention using the AI detection system could reduce stopped-on-track incidents by up to 86 percent.

Visual example of how LiDAR senses the surrounding environment.

Other state DOTs have also started to implement AI-based programs. The Georgia Department of Transportation, in partnership with Georgia Tech, used AI and vehicle-mounted mobile phone cameras to survey 22,000 road signs around potentially dangerous road curves and improve safety at those locations. The Texas Department of Transportation (TxDOT) assessed pavement conditions using LiDAR and AI. TxDOT’s project shares similarities with the research presented by Surya Teja Swarna, but used LiDAR instead of a mobile phone camera.

In 2022, the Nevada Department of Transportation partnered with the Nevada Highway Patrol, the Regional Transportation Commission of Southern Nevada, and a private technology company to launch an AI-based platform that facilitated the reporting of real-time crash locations. A study on this project found that the AI platform uncovered 20 percent more crashes than previously reported and reduced emergency response time by nine to ten minutes on average while eliminating the need to dial for help.


Recent National Research  

Responses from state DOT officials demonstrate the varied applications of ML solutions. Courtesy of NCHRP.

The National Cooperative Highway Research Program (NCHRP) published a 2024 research report, Implementing and Leveraging Machine Learning at State Departments of Transportation, that identifies trends in AI transportation research and implementation, with a specific focus on machine learning, and creates a roadmap for future implementation. The researchers surveyed state DOTs on their plans regarding AI, reported case studies of ML implementation by state DOTs, and listed strategies to help DOTs facilitate further adoption of AI solutions.

The survey of state DOT officials covered various topics, including each agency’s familiarity with AI methods and tools, the types of methods and applications in use, and challenges in implementation. Among the challenges, DOT officials noted a lack of public trust, insufficient data collection and storage infrastructure, and, most commonly, a shortage of staff with AI expertise. Most computer and data scientists choose to work in the private sector, and it can be difficult to recruit them to a transportation agency.

The NCHRP report also included multiple case studies from state DOTs such as Nebraska, California, and Iowa, documenting the experiences of these agencies in developing and implementing ML programs.

  • Nebraska DOT (NDOT) used a computer vision Convolutional Neural Network (CNN) algorithm to detect and assess guardrail quality. NDOT recorded 1.5 million images of guardrails and used AI to save time and money compared to manual inspection; the algorithm achieved accuracies of 97 percent for guardrail detection and 85 percent for classifying guardrails into three types. Among the challenges, NDOT found that it lacked both the infrastructure to process large volumes of data and in-house ML expertise. The agency addressed the former by using a private vendor to process the data and the latter by collaborating with consultants from the University of Nebraska. 
  • The California Department of Transportation (Caltrans) has leveraged AI/ML applications across various projects and partnered with numerous tech companies, including Google. One area of emphasis for Caltrans has been workforce capacity development. While most staff do not have experience with AI-based data analytics, they do have experience with GIS. Caltrans has worked with GIS tool developers to incorporate ML functionalities into the basic user interface of GIS programs, making it more intuitive for their workforce. 
  • Iowa State University, funded by the Iowa Department of Transportation, developed a real-time ML tool to monitor highway performance, enabling a rapid response to traffic congestion. The researchers identified the need for high-performance computing as a significant challenge preventing large-scale implementation. Mass deployment of the tools used in the research study would require a considerable expense, partially due to the stipulation that the code be at least 99 percent reliable. 

For more information on the application and implementation of AI by transportation agencies, the National Academies of Sciences, in collaboration with the NCHRP, published two additional reports in 2024. One, Artificial Intelligence Opportunities for State and Local DOTs: A Research Roadmap, utilizes machine learning methods to analyze research trends in AI and how State DOTs can implement the research. The other, Implementing Machine Learning at State Departments of Transportation: A Guide, serves as a complementary document to the NCHRP report on implementing and leveraging machine learning. 

On a national level, USDOT published its Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence compliance plan in September 2024. USDOT has taken several measures to advance the implementation of AI, including forming an AI Governance Board chaired by the Deputy Secretary and vice-chaired by a new Chief Artificial Intelligence Officer (CAIO), creating an AI Accelerator Roadmap, and providing funds for AI research and implementation.

Lastly, the American Association of State Highway and Transportation Officials (AASHTO) hosted a knowledge session on the role of AI in transportation in April 2024. Practitioners on the panel highlighted AI’s potential to eliminate dangerous aspects of data collection and to enable proactive solutions rather than reactive responses to crashes or injuries. The panel discussion also touched on the importance of building trust in a period of rapid AI development, noting the critical role academic researchers can play as partners with state DOTs in advancing AI technology in ways that benefit traffic safety and workforce safety, among other topics.


TRID Database 

Artificial Intelligence-based research can be found via TRB’s TRID database. The following are some relevant articles published on recent New Jersey transportation research in AI.

  • Bagheri, M., B. Bartin, and K. Ozbay. (2023). Implementing Artificial Neural Network-Based Gap Acceptance Models in the Simulation Model of a Traffic Circle in SUMO. Transportation Research Record: Journal of the Transportation Research Board, Vol. 2677. https://trid.trb.org/View/2166547
  • Hasan, A.S., M. Jalayer, S. Das, and M. Bin Kabir. (2024). Application of machine learning models and SHAP to examine crashes involving young drivers in New Jersey. International Journal of Transportation Science and Technology, Vol. 14. https://trid.trb.org/View/2162338
  • Hasan, A.S., M. Jalayer, S. Das, and M. Bin Kabir. (2023). Severity model of work zone crashes in New Jersey using machine learning models. Journal of Transportation Safety & Security, Vol. 15. https://trid.trb.org/View/2190127
  • Najafi, A., Z. Amir, B. Salman, P. Sanaei, E. Lojano-Quispe, A. Maher, and R. Schaefer. (2024). A Digital Twin Framework for Bridges. ASCE International Conference on Computing in Civil Engineering 2023, American Society of Civil Engineers, pp 433-441. https://trid.trb.org/view/2329319  
  • Nayeem, M., A. Hasan, and M. Jalayer. (2023). Investigation of Young Pedestrian Crashes in School Districts of New Jersey Using Machine Learning Models. International Conference on Transportation and Development 2023, American Society of Civil Engineers. https://trid.trb.org/View/2196775
  • Patel, D., P. Hosseini, and M. Jalayer. (2024). A framework for proactive safety evaluation of intersection using surrogate safety measures and non-compliance behavior. Accident Analysis & Prevention, Vol. 192. https://trid.trb.org/View/2242428
  • Zaman, A., Z. Huang, W. Li, H. Qin, D. Kang, and X. Liu. (2023). Artificial Intelligence-Aided Grade Crossing Safety Violation Detection Methodology and a Case Study in New Jersey. Transportation Research Record: Journal of the Transportation Research Board, Vol. 2677. https://trid.trb.org/View/2169797
  • Zaman, A., Z. Huang, W. Li, H. Qin, D. Kang, and X. Liu. (2024). Development of Railroad Trespassing Database Using Artificial Intelligence. Rutgers University, New Brunswick, Federal Railroad Administration, 80p. https://trid.trb.org/view/2341095 

Additional Resources

Zone for AI to look for trespassing at railroad crossing

Research Spotlight: Exploring the Use of Artificial Intelligence to Improve Railroad Safety

Partnering with the Federal Railroad Administration, NJ TRANSIT, and the New Jersey Department of Transportation (NJDOT), a research team at Rutgers University is using artificial intelligence (AI) techniques to analyze rail crossing safety issues. Drawing on closed-circuit television (CCTV) cameras installed at rail crossings, the team of Asim Zaman, Xiang Liu, Zhipeng Zhang, and Jinxuan Xu has developed and refined an AI-aided framework for detecting railroad trespassing events, identifying trespasser behavior, and capturing video of infractions. The system uses an object detection algorithm to efficiently observe and process video data into a single dataset.

Rail trespassing is a significant safety concern resulting in injuries and deaths throughout the country, with the number of such incidents increasing over the past decade. Following passage of the 2015 Fixing America’s Surface Transportation (FAST) Act that mandated the installation of cameras along passenger rail lines, transportation agencies have installed CCTV cameras at rail crossings across the country.  Historically, only through recorded injuries and fatalities were railroads and transportation agencies able to identify crossings with trespassing issues. This analysis did not integrate information on near misses or live conditions at the crossing. Cameras could record this data, but reviewing the video would be a laborious task that required a significant resource commitment and could lead to missed trespassing events due to observer fatigue.

Zaman, Liu, Zhang, and Xu saw this problem as an opportunity to put AI techniques to work, making effective use of the available video and automating the observational process in a more systematic way. After utilizing AI for basic video analysis in a prior study, the researchers theorized that they could train a deep learning model to analyze the videos from these crossings and identify all trespassing events.

Working with NJDOT and NJ TRANSIT, they gained access to video footage from a crossing in Ramsey, NJ. Using a deep learning-based detection method named You Only Look Once (YOLO), their AI framework detected trespassing events, differentiated the types of violators, and generated clips for review. The tool identified a trespass only when the signal lights and crossing gates were active, tracking objects as they moved from image to image within the defined space of the right-of-way. Figure 1 depicts the key steps in applying AI to the analysis of a live video stream or archived surveillance video.

Figure 1. General YOLO-Based Framework for Railroad Trespass Detection illustrates a step-by-step process involving AI algorithm configurations, YOLO-aided detection, and how trespassing detection incidents are saved and recorded to a database for more intensive analysis and characterization (e.g., trespasser type, day, time, weather, etc.)
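The gate-and-zone logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the research team's code: the polygon coordinates, detection format, and function names are hypothetical, and a real deployment would take the object centers from a YOLO detector's bounding boxes rather than hard-coded values.

```python
# Illustrative sketch (not the Rutgers team's implementation): an event is
# flagged as a trespass only while the crossing is active (signal lights and
# gates on) AND a tracked object's position falls inside a polygon marking
# the railroad right-of-way. Zone coordinates below are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from (x, y)
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def is_trespass(detection, crossing_active, danger_zone):
    """Flag a detection as a trespass only while gates/lights are active."""
    if not crossing_active:
        return False
    cx, cy = detection["center"]  # e.g., center of a YOLO bounding box
    return point_in_polygon(cx, cy, danger_zone)

# Hypothetical right-of-way polygon in pixel coordinates
zone = [(100, 200), (500, 200), (500, 400), (100, 400)]
print(is_trespass({"center": (300, 300)}, True, zone))   # True: in zone, gates active
print(is_trespass({"center": (300, 300)}, False, zone))  # False: gates inactive
```

Conditioning detection on the gate state is what keeps the tool from logging ordinary crossing traffic; only movement inside the right-of-way while the warning devices are active counts as a violation.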

The researchers applied the AI review to 1,632 hours of video spanning 68 days of monitoring. It discovered 3,004 instances of trespassing, an average of 44 per day, or nearly two per hour. The researchers were able to demonstrate how the captured incidents could be used to formulate a demographic profile of trespassers (Figure 2) and better examine the environmental context leading to trespassing events to inform the selection and design of safety countermeasures (Figure 3).
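As a quick arithmetic check on the figures above, using only the counts reported in the text:

```python
# Reported totals: 3,004 trespass events over 68 days (1,632 hours) of video.
events = 3004
days = 68
hours = 1632

per_day = events / days     # about 44.2 events per day
per_hour = events / hours   # about 1.84, i.e. nearly two per hour
print(round(per_day, 1), round(per_hour, 2))
```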

Figure 2: Similar to patterns found in studies of rail trespassing fatalities, trespassing pedestrians were more likely to be male than female. Source: Zhang et al.
Figure 3: Trespassing events were characterized by gate angle and timing before/after a train pass to isolate the context of risky behavior. Source: Zhang et al.

A significant innovation from this research has been the production of a video clip showing when and how each trespass event occurred; the ability to visually review the precise moment reduces overall data storage and the time needed to perform labor-intensive reviews. (Zhang, Zaman, Xu, & Liu, 2022)

With the efficient assembly and analysis of video big data through AI techniques, agencies have an unprecedented opportunity to observe patterns of trespassing. Extending this method to multiple locations holds promise for improving the efficiency and accuracy of the AI techniques in various lighting, weather, and other environmental conditions and, more generally, for building a deeper understanding of the environmental context contributing to trespassing behaviors.

In fact, the success of this AI-aided Railroad Trespassing Tool has led to new opportunities to demonstrate its use. The researchers have already expanded their research to more crossings in New Jersey and into North Carolina and Virginia. (Bruno, 2022) The Federal Railroad Administration has also awarded the research team a $582,859 Consolidated Rail Infrastructure and Safety Improvements Grant to support the technology’s deployment at five at-grade crossings in New Jersey, Connecticut, Massachusetts, and Louisiana. (U.S. DOT, Federal Railroad Administration, 2021) Rutgers University and Amtrak have provided a 42 percent match of the funding.

The program’s expansion to more locations may lead to further improvements in the precision and quality of the AI detection data and methods. The researchers speculate that this technology could integrate with Positive Train Control (PTC) systems and highway Intelligent Transportation Systems (ITS). (Zhang, Zaman, Xu, & Liu, 2022) This merging of technologies could revolutionize railroad safety. To read more about this study and methodology, see the April 2022 Accident Analysis & Prevention article.

References

Bruno, G. (2022, June 22). Rutgers Researchers Create Artificial Intelligence-Aided Railroad Trespassing Detection Tool. Retrieved from https://www.rutgers.edu/news/rutgers-researchers-create-artificial-intelligence-aided-railroad-trespassing-detection-tool

NJDOT Technology Transfer. (2021, November 8). How Automated Video Analytics Can Make NJ’s Transportation Network Safer and More Efficient. Retrieved from https://www.njdottechtransfer.net/2021/11/08/automated-video-analytics/

Tran, A. (n.d.). Artificial Intelligence-Aided Railroad Trespassing Data Analytics.

United States Department of Transportation: Federal Railroad Administration. (2021). Consolidated Rail Infrastructure and Safety Improvements (CRISI) Program: FY2021 Selections. Retrieved from https://railroads.dot.gov/elibrary/consolidated-rail-infrastructure-and-safety-improvements-crisi-program-fy2021-selections

Zaman, A., Ren, B., & Liu, X. (2019). Artificial Intelligence-Aided Automated Detection of Railroad Trespassing. Transportation Research Record: Journal of the Transportation Research Board, 25-37.

Zhang, Z., Zaman, A., Xu, J., & Liu, X. (2022). Artificial intelligence-aided railroad trespassing detection and data analytics: Methodology and a case study. Accident Analysis & Prevention.