Challenges in Accessibility

"Bus drivers pass me by because it's too busy." "I missed my appointment because I was unable to board the bus." "I was asked to travel at a different time as I was trying to get to work."

For many passengers with disabilities, accessing public transport remains a significant challenge. This project set out to advance accessibility on London buses through AI-driven computer vision technology and cross-sector collaboration. Partnering with Marshalls Coaches and a local charity, the Milton Keynes Centre for Integrated Living, the initiative prioritised wheelchair users, prams, and guide dogs, fostering inclusivity and equity on London buses. Data collection, testing, and stakeholder engagement delivered practical solutions with measurable impact on accessibility.

AI-Powered Accessibility Solutions

The project successfully delivered a pilot to Transport for London (TfL), with an accurate AI model for detecting occupancy of the disabled bays on London buses, prioritising the needs of wheelchair users, prams, and guide dogs. Success was measured through the high recognition rates achieved by analysing hundreds of hours of CCTV footage and through feedback from accessibility groups, confirming the solution's practical utility. Key benefits included improved accessibility for disabled passengers, increased public awareness, and stronger partnerships with community stakeholders.

Overcoming Challenges

To ensure adaptability to real-world conditions, the AI underwent rigorous iterative testing and data refinement. This involved:

- Data collection with Marshalls Coaches and volunteers from the Milton Keynes Centre for Integrated Living
- Recording multiple real-world scenarios, amassing hours of training footage
- Supplementary data from TfL, enhancing model robustness

Future-Ready Integration

The LifeSafety team, in collaboration with the Digital Catapult programme, Bridge AI, and Innovate UK, developed an integrated hardware and software prototype designed for seamless integration into the onboard London bus system. The solution enables real-time detection of occupancy in the disabled bays and notifies waiting passengers about space availability so they can better plan their journeys. Automated onboard announcements could also proactively encourage passengers to vacate the space ahead of a disabled person's arrival, reducing potential driver/passenger confrontations. This approach aims to provide a smoother, more accessible, and inclusive boarding experience for all passengers, with the potential for rollout across all public transport in the future.
Overview

This guide will help you execute a Python script designed to detect and blur faces in a video. Follow these steps carefully and you should be able to run the script without any issues.

Prerequisites

1. Python installation: Ensure you have Python installed on your computer. You can download it from python.org.
2. Required libraries: Install the libraries the script depends on. Open a command prompt (Windows) or terminal (Mac/Linux) and run the following command:

   pip install opencv-python opencv-python-headless numpy

   (opencv-python and opencv-python-headless both provide the cv2 module; installing just one of them, typically opencv-python on a desktop system, is usually sufficient.)
3. Model files: Download the required model files:
   - Model file: res10_300x300_ssd_iter_140000.caffemodel from this link.
   - Configuration file: deploy.prototxt from this link.
4. Input video: Have the input video file ready (e.g., wheel4.mp4). Place this file in the same directory as your script.

Script Execution Steps

1. Save the script: Copy the provided script into a text editor and save it as face_blur.py in the same directory where you placed the model files and the input video. (A reference sketch of such a script appears after the Troubleshooting section.)
2. Run the script: Open a command prompt or terminal, navigate to the directory where the script and files are located, and run:

   python face_blur.py

   This command executes the script, which processes the video to detect and blur faces and saves the output video as wheel5.mp4.

Troubleshooting

- Ensure all files are in the same directory: the script, model files, and input video should all be in the same directory.
- Check Python and library versions: make sure you have compatible versions of Python and the required libraries.
- Model file download: verify that the model files are correctly downloaded and not corrupted.
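Reference Sketch

For orientation, below is a minimal sketch of what a script like face_blur.py might look like. It assumes the script uses OpenCV's DNN module with the res10 SSD face detector and deploy.prototxt named above, reads wheel4.mp4, and applies a Gaussian blur over each detected face before writing wheel5.mp4; the actual script provided with this guide may differ in detail.

```python
# Hypothetical sketch: detect faces with OpenCV's res10 SSD model and blur them.
import cv2
import numpy as np

MODEL = "res10_300x300_ssd_iter_140000.caffemodel"
CONFIG = "deploy.prototxt"
INPUT_VIDEO = "wheel4.mp4"
OUTPUT_VIDEO = "wheel5.mp4"
CONF_THRESHOLD = 0.5  # minimum detection confidence; tune as needed

net = cv2.dnn.readNetFromCaffe(CONFIG, MODEL)

cap = cv2.VideoCapture(INPUT_VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(OUTPUT_VIDEO, cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # The SSD model expects a 300x300 BGR input with mean subtraction.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()

    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence < CONF_THRESHOLD:
            continue
        # Boxes are returned as fractions of the frame size; scale to pixels.
        box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
        x1, y1, x2, y2 = box.astype(int)
        x1, y1 = max(0, x1), max(0, y1)
        x2, y2 = min(width, x2), min(height, y2)
        if x2 > x1 and y2 > y1:
            # Blur the detected face region in place.
            frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], (51, 51), 0)

    writer.write(frame)

cap.release()
writer.release()
print(f"Saved blurred video to {OUTPUT_VIDEO}")
```

Running `python face_blur.py` with this sketch in place of the provided script should produce a wheel5.mp4 in the same directory, with detected faces blurred frame by frame.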