You don't think of monster trucks when it comes to safety…

A holistic overview of how to design EdTech products safely in the age of AI

John Quy

Introduction

In the rapidly evolving landscape of software development, where progress seems to ride an ever-steeper curve as Moore's law compounds with artificial intelligence (AI), ensuring the safety and reliability of EdTech products is both paramount and genuinely challenging. This is particularly crucial when these products are used in sensitive settings such as early school education and within communities of disadvantaged children. The goal is to create AI systems that are not only efficient and innovative but also secure, transparent, and equitable. Achieving this requires a multifaceted approach from the get-go: strict quality control, comprehensive testing, a commitment to ethical standards, and adherence to best practices in software development.

Essentially how to be as boring as possible but for the right reasons…

Discussing AI safety in the context of EdTech requires acknowledging that media discussions about "regulating" AI often focus on one type: AGI. Recent developments, such as the formation of the new DSIT body in the UK and the AI Safety Summit at Bletchley, where efforts are being made to position the UK as a leader in cutting-edge technology, tend to concentrate on aligning super-capable AGI rather than the generative or narrow AI currently prevalent in most EdTech products. These products are highly efficient at specific tasks, unlike AGI, which can learn and replicate a wide range of functions autonomously. This article will highlight key considerations for the latter type of AI alignment, emphasising the importance of human oversight, or "human in the loop" technology. (If you ever doubt whether humans merely slow things down or are unnecessary, consider that the F-35 fighter jet still carries a human pilot even though it could be computer controlled.) These subtleties form the basis for creating new, safe EdTech products, and will be explored further in subsequent deep dives and reports featuring insights from educators and industry data.
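To make the "human in the loop" idea concrete, here is a minimal sketch of a review gate for AI-generated teaching content. The `Draft`, `route`, and `threshold` names are hypothetical illustrations, not from any real product: the point is simply that low-confidence output never reaches a classroom without a person checking it first.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(draft: Draft, threshold: float = 0.9) -> str:
    """Route an AI-generated draft: auto-publish only above the
    confidence threshold; otherwise queue it for a human reviewer."""
    if draft.confidence >= threshold:
        return "auto-publish"
    return "human-review"

# Anything the model is unsure about is held back for a teacher to check.
assert route(Draft("Photosynthesis converts light into energy.", 0.95)) == "auto-publish"
assert route(Draft("The moon is made of cheese.", 0.40)) == "human-review"
```

In a real product the threshold would be tuned against measured error rates, and the "human-review" path would feed a queue that an educator actually works through, but the shape of the control stays the same.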

Quality Control and Compliance with Standards

The first step towards ensuring AI safety is adherence to rigorous quality control standards. Products should comply with or exceed current British and European standards, maintaining detailed records of development and testing for a minimum of ten years post-release. This guarantees that every product entering the market has been thoroughly vetted and is fit for its intended purpose. For instance, Anthropic, the developer of the Claude large language models (LLMs), has set a commendable example in prioritising safety.

Companies must also comply with industry-specific regulations and international standards such as ISO/IEC 27001 for information security, which typically incurs internal costs of around £5,000 to £10,000 in human resources and auditing for companies under 20 employees. Additionally, adherence to Information Commissioner's Office (ICO) guidance and, in the European Union, the General Data Protection Regulation (GDPR) is crucial for data privacy across any communication, resource generation, and processing.

[Image: Bill Gates. Credit: Inc. Magazine]

Reducing Risks: Testing, Transparency, and Bias Mitigation

Reducing risks like AI hallucinations, misinformation, and bias is crucial. In educational settings, uninformed adoption of AI can lead to erroneous decisions that impact student learning and their future economic prospects. AI products should undergo alpha testing, third-party compliance checks, and incremental small-scale testing. Double-blind studies, which will be published in the upcoming safety newsletter by Pocket Teacher, provide valuable insights into bias mitigation and risk minimisation. For instance, there is a possibility that widespread use of OCR and AI could mitigate some bias in handwriting-based assessments, since machines carry fewer preconceived human stereotypes about "good" or "bad" handwriting.
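One simple way to make bias testing operational is to compare a model's marking accuracy across demographic groups before release. The sketch below is a hypothetical illustration (the `group_accuracy` and `disparity` helpers are invented for this article, not from any real test suite): a large gap between the best- and worst-served group is a red flag worth investigating before the product reaches real classrooms.

```python
def group_accuracy(records):
    """Accuracy of automated marks per demographic group.
    Each record is a tuple: (group, predicted_mark, true_mark)."""
    totals, correct = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == truth:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def disparity(records) -> float:
    """Largest accuracy gap between any two groups:
    a simple pre-release red flag for bias."""
    acc = group_accuracy(records)
    return max(acc.values()) - min(acc.values())

# Group A is marked perfectly, group B only half the time.
sample = [("A", 1, 1), ("A", 0, 0), ("B", 1, 0), ("B", 0, 0)]
assert disparity(sample) == 0.5
```

Real bias audits use richer metrics (false-positive parity, calibration by group), but even this crude gap measure catches the worst failures if it is run routinely.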

Advanced Techniques in Software Safety

Employing computer simulations and modelling is essential to predict AI performance under various conditions. These simulations help identify potential safety issues, allowing for pre-emptive improvements. Safety features such as encryption at rest and in transit, firewalls, and intrusion detection systems should be integrated into the software design to enhance data protection and prevent security breaches. In practice, this means hiring internal white hats and/or running high-load testing in virtual sandboxes.
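As a toy example of the kind of simulation meant here, a Monte Carlo model can estimate how often demand outstrips capacity, say, a whole school logging in at 9am. The `simulate_load` function and its 0.3 connection probability are illustrative assumptions, not figures from any real deployment.

```python
import random

def simulate_load(requests: int, capacity: int,
                  trials: int = 1000, seed: int = 42) -> float:
    """Monte Carlo estimate of the probability that concurrent demand
    exceeds capacity. Each of `requests` potential users independently
    connects in a given window with probability 0.3 (an assumption)."""
    random.seed(seed)
    overloads = 0
    for _ in range(trials):
        concurrent = sum(1 for _ in range(requests) if random.random() < 0.3)
        if concurrent > capacity:
            overloads += 1
    return overloads / trials

# 100 potential users, capacity for 40 concurrent sessions:
# overload is possible but should be rare.
risk = simulate_load(requests=100, capacity=40)
assert 0.0 <= risk <= 1.0
```

Running this before launch, with connection probabilities taken from real usage data, tells you whether to provision more capacity long before a classroom ever sees a spinner.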

Material Selection and Crash Energy Management

In software terms, material selection refers to choosing programming languages, frameworks, and libraries based on their security and reliability. Crash energy management involves designing systems that handle errors and exceptions gracefully, maintaining functionality and preventing outright system crashes.
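A minimal sketch of crash energy management in code: rather than letting a failed call bring the lesson down, retry the primary operation and then degrade to a safe fallback. The `with_fallback` helper and the "cached feedback" scenario are hypothetical, invented here to illustrate the pattern.

```python
import time

def with_fallback(primary, fallback, retries: int = 2, delay: float = 0.0):
    """Try the primary operation a few times; on repeated failure,
    degrade gracefully to a safe fallback instead of crashing."""
    for _ in range(retries):
        try:
            return primary()
        except Exception:
            time.sleep(delay)  # back off briefly before retrying
    return fallback()

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    raise ConnectionError("AI grading service unavailable")

# The lesson continues with a cached result rather than a crash.
result = with_fallback(flaky, lambda: "cached feedback")
assert result == "cached feedback"
assert calls["n"] == 2  # primary was retried before falling back
```

The frameworks discussed below each ship their own, far more sophisticated versions of this idea in their exception-handling layers.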

Both concepts are pivotal for ensuring the safety and usability of applications, and they extend to the choice of programming languages, frameworks, and libraries, which must be selected for their security, reliability, and suitability for the task at hand. In this context, an analysis of Next.js, Laravel, and Ruby on Rails is instrumental in understanding their respective strengths and weaknesses in terms of safety and usability.

Next.js:🔥

Next.js, a React framework, is renowned for its server-side rendering capabilities, which enhance the performance and SEO of web applications. From a security standpoint, Next.js provides automatic protection against XSS attacks by securely rendering user input. It also supports static site generation, which can reduce server-side vulnerabilities. In terms of crash energy management, Next.js's robust error handling mechanisms allow for graceful degradation of services, ensuring that applications remain functional even when errors occur. However, its reliance on the Node.js environment means developers must be vigilant about dependencies and their security implications.

Laravel:❤️

Laravel, a PHP framework, is highly regarded for its elegant syntax and robust features, making it a preferred choice for rapid application development. It excels in security through features like built-in authentication, SQL injection protection, and cross-site request forgery (CSRF) protection. Laravel's Eloquent ORM also ensures safe database interactions. For crash energy management, Laravel's exception handling and logging tools enable developers to manage and mitigate errors effectively, ensuring application stability. However, Laravel applications may require additional optimisation for high performance, and developers must be cautious about third-party packages used for extended functionality.

Ruby on Rails:🔧

Ruby on Rails, often referred to as Rails, is a model-view-controller (MVC) framework known for its convention-over-configuration philosophy, which streamlines the development process. In terms of security, Rails offers strong protections against common vulnerabilities like SQL injection and CSRF attacks. Its Active Record ORM also promotes safe database practices. Rails is designed to recover from crashes efficiently, with detailed error logs and a supportive community that regularly contributes stability and security patches. However, its runtime efficiency can be a concern, and Rails applications may require careful tuning and scaling strategies for optimal performance.