5 AI and analytics trends shaping the sci-tech sectors


16 Feb 2024


William Fry’s Barry Scannell explores the biggest trends in AI and analytics and the legal, societal and ethical challenges they pose.


AI and analytics have become essential tools for innovation and transformation in the sci-tech sectors, from healthcare and biotechnology to energy and environment.

These technologies enable data-driven insights, automation and personalisation, enhancing efficiency, quality and impact. However, they also pose ethical, legal and social challenges that require careful consideration and regulation.

Trend 1: Explainable AI 

Explainable AI (XAI) refers to AI systems that can provide transparent and understandable explanations for their decisions and actions, enabling human oversight and trust. XAI is especially important for high-risk and high-impact applications, such as healthcare, finance and security, where the consequences of AI errors or biases can be severe.

For example, in healthcare, XAI can help doctors and patients understand how AI diagnoses or treatments are derived, and what factors influence them. In finance, XAI can help regulators and auditors verify the compliance and fairness of AI systems, such as credit scoring or fraud detection. In security, XAI can help authorities and citizens ensure the accountability and legitimacy of AI systems, such as facial recognition or surveillance.

XAI can also help comply with regulatory frameworks, such as the EU’s AI Act, which mandates transparency and accountability for high-risk AI systems. According to the AI Act, high-risk AI systems must provide users with clear and accurate information about their capabilities, limitations and expected performance, as well as the logic and criteria behind their outputs. XAI can help meet these requirements by providing meaningful and accessible explanations that inform and empower users and affected parties.

In fact, transparency and explainability are key features emerging from global efforts to legislate for and regulate AI.
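To make the idea concrete, here is a minimal sketch of one explainability technique: attributing a linear credit-scoring model’s output to its individual input features. The model, feature names and weights are all hypothetical illustrations, not a real scoring system.

```python
def explain_linear_score(weights, bias, features):
    """Return the model's score plus a per-feature breakdown of
    how much each input contributed to that score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical weights and applicant data for illustration only.
weights = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
applicant = {"income": 5.0, "debt_ratio": 2.0, "years_employed": 3.0}

score, contributions = explain_linear_score(weights, bias=1.0, features=applicant)

# Present the largest drivers of the decision first.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:.2f}")
```

For a simple linear model the explanation is exact; for complex models, XAI methods approximate this kind of per-feature attribution, but the goal is the same: a breakdown a regulator, auditor or affected applicant can actually read.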

Trend 2: Edge AI 

Edge AI refers to AI systems that run on local devices, such as smartphones, sensors or drones, rather than on cloud servers. Edge AI can offer several advantages, such as faster processing, lower latency, reduced bandwidth and enhanced privacy and security.

For example, in healthcare, edge AI can enable real-time and personalised AI applications, such as wearable devices that monitor vital signs, or smart glasses that assist surgeons. In energy, edge AI can enable efficient and resilient AI applications, such as smart meters that optimise consumption, or microgrids that manage supply and demand. In environmental monitoring, edge AI can enable scalable and robust AI applications, such as sensors that detect pollution, or drones that monitor wildlife.

Research from the likes of Apple, such as the ‘LLM in a Flash’ paper, shows that efforts are being made to store complex models locally, and to compress models to a level where they can be useful as on-device technologies.

By moving AI computation to the edge, the carbon footprint of AI can be significantly reduced, as well as the dependency on internet infrastructure and data centres. Edge AI can help align AI with the principles of sustainability and circular economy by minimising waste and maximising efficiency.

Edge AI can help democratise AI access and empower users with more control and autonomy over their data and devices. By running AI locally, users can avoid sending their data to third parties, such as cloud providers or AI vendors, who may misuse or compromise their data.

Users can also customise and fine-tune their AI applications, according to their preferences and needs, without relying on external updates or services. Edge AI can help protect the privacy and security of users, as well as enhance their agency and choice.
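One of the compression steps behind on-device AI can be sketched in a few lines. Below is a toy version of post-training quantisation, which maps 32-bit floating-point weights to 8-bit integers so a model takes roughly a quarter of the memory. The weight values are hypothetical and the scheme is deliberately simplified (a single scale factor for the whole list).

```python
def quantize_int8(weights):
    """Map float weights to int8 values with one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in quantized]

weights = [0.82, -0.41, 0.05, -1.27, 0.33]   # hypothetical model weights
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)

# Each weight now needs 1 byte instead of 4: ~4x smaller on disk and in
# memory, at the cost of a small rounding error bounded by scale / 2.
```

Real on-device toolchains add per-channel scales, calibration data and hardware-specific formats, but the trade-off is the same: a little precision exchanged for a model small and fast enough to run locally.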

Trend 3: Federated learning

Federated learning is a technique that allows AI models to learn from decentralised and distributed data sources, without requiring data to be stored or shared centrally. Federated learning can enable collaborative and privacy-preserving AI applications, such as medical diagnosis, fraud detection or smart cities, where data is sensitive, heterogeneous or geographically dispersed. For example, in medical diagnosis, federated learning can allow AI models to learn from data across different hospitals, clinics or regions, without exposing patient data or violating data protection laws.

In fraud detection, federated learning can allow AI models to learn from data across different banks, merchants or customers, without revealing financial data or compromising data security. In smart cities, federated learning can allow AI models to learn from data across different devices, vehicles or infrastructures, without transferring data or consuming bandwidth.

Federated learning can help overcome the challenges of data scarcity, privacy and security, as well as foster data sovereignty and social good. By allowing data to remain local and distributed, federated learning can respect the ownership and control of data holders, such as individuals, organisations or communities, who can decide how and when to participate in AI learning.
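The core loop of federated learning can be sketched with the standard federated averaging (FedAvg) idea: each site trains on its own data and only model updates, never raw records, are sent to the server. The “model” below is a single weight fitting y ≈ w·x, and the three sites’ datasets are hypothetical stand-ins for, say, three hospitals.

```python
def local_step(weight, data, lr=0.1):
    """One gradient-descent step of a least-squares fit y ~ weight * x,
    computed only on this site's local data."""
    grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(global_weight, sites):
    """Each site updates the shared weight locally; the server then
    averages the updates. Raw data never leaves a site."""
    local_weights = [local_step(global_weight, data) for data in sites]
    return sum(local_weights) / len(local_weights)

# Hypothetical (x, y) data held by three separate sites, all near y = 2x.
sites = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (0.5, 1.1)],
    [(2.5, 5.2), (1.0, 1.9)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
# w converges towards ~2, learned jointly without pooling the data.
```

Production systems layer secure aggregation, weighting by dataset size and differential privacy on top of this loop, but the privacy property the article describes comes from the same structure: updates travel, data stays put.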

Trend 4: AI governance 

AI governance refers to the policies, principles and practices that guide the development, deployment and use of AI systems, ensuring that they are ethical, trustworthy and beneficial for society. AI governance can involve various stakeholders, such as governments, regulators, developers, users and civil society, who can collaborate and coordinate on setting standards, norms and rules for AI.

Developers can adopt ethical codes and best practices for AI, such as the IEEE’s Ethically Aligned Design, which provides guidelines and recommendations for human-centric and value-based AI. Users can engage in AI design and evaluation through resources such as the Mozilla Foundation’s Trustworthy AI Toolkit, which offers tools for co-creating and assessing AI systems. Civil society can advocate for AI awareness and literacy, as the AI Now Institute does through its research and outreach on the social implications of AI.

AI governance can also include mechanisms for monitoring, auditing and enforcing AI compliance, as well as providing redress and remedy for AI harms. For example, monitoring can involve collecting and analysing data and metrics on AI performance and behaviour, such as accuracy, fairness, or safety.

Auditing can involve verifying and validating the compliance and quality of AI systems, such as transparency, accountability or robustness. Enforcement can involve imposing sanctions and penalties for AI violations and misconduct, such as fines, bans or revocations. Redress and remedy can involve providing compensation and restoration for AI victims and affected parties, such as damages, corrections, or apologies.
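As a concrete example of the monitoring described above, here is a toy sketch of one fairness metric a governance process might track: the demographic parity gap, i.e. the difference in positive-outcome rates between two groups. The decision data and group split are entirely hypothetical.

```python
def positive_rate(decisions):
    """Share of cases with a positive outcome (1 = approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute gap in positive-outcome rates between groups A and B."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical loan decisions recorded for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1]   # 4 of 6 approved
group_b = [1, 0, 0, 1, 0, 0]   # 2 of 6 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")   # an auditor might flag gaps above a set threshold
```

No single number captures fairness, and different metrics can conflict, but tracking even a simple measure like this over time gives monitors and auditors something verifiable to act on.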

AI governance can help address the risks and challenges of AI, such as bias, discrimination, privacy, security, accountability and human dignity, as well as promote the values and rights of AI users and affected parties.

AI governance can also help align AI with the principles of democracy and human rights, such as participation, representation or justice, as well as the goals of sustainable development and social good, such as health, education or environment. AI governance can help ensure that AI is developed and used in a responsible, ethical and beneficial manner, for the common good of humanity.

Trend 5: AI and the environment 

AI systems can consume substantial energy and resources and generate significant emissions and waste. The environmental impact of AI can be reduced by adopting green and circular practices, such as using renewable energy sources, improving energy efficiency, and recycling and reusing materials and equipment.
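A back-of-the-envelope sketch shows why the choice of energy source matters so much. The figures below (hardware power draw, training time, data-centre overhead and grid carbon intensity) are all hypothetical round numbers, not measurements of any real system.

```python
def training_emissions_kg(power_kw, hours, pue, grid_kg_per_kwh):
    """Estimated CO2-equivalent (kg) for a training run.
    pue = power usage effectiveness, the data-centre overhead factor."""
    energy_kwh = power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Same hypothetical training run on two different grids.
fossil_heavy = training_emissions_kg(power_kw=300, hours=100, pue=1.5,
                                     grid_kg_per_kwh=0.45)
renewable = training_emissions_kg(power_kw=300, hours=100, pue=1.5,
                                  grid_kg_per_kwh=0.05)
# Identical compute, roughly 9x lower emissions on the cleaner grid.
```

The arithmetic is trivial, but it illustrates the article’s point: shifting the same workload to renewable-powered infrastructure, or cutting the energy term through efficiency, directly shrinks AI’s footprint.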

AI and analytics are transforming the sci-tech sectors, offering new opportunities and solutions for innovation and impact. However, they also raise ethical, legal, and social issues that need to be addressed with care and responsibility.

By Barry Scannell

Barry Scannell is a consultant in William Fry’s Technology department.
