Lingua Café: Arabic, Hebrew, and Hostility: reporting tools & accessibility in Palestine

22.11.2023 | In Conversation with Rama Salahat

Rama Salahat, our guest for this Lingua Café chat, is Director of Technology at 7amleh - The Arab Centre for Social Media Development. 7amleh is a Palestinian non-profit, non-governmental organisation that aims to create a safe, fair and free digital space for Palestinians and everyone else. 7amleh’s work covers a wide range of areas, including raising awareness through educational and media campaigns, holding networking sessions and events, and preparing guides that are accessible to all.

The efforts of 7amleh focus on gathering in-depth information on digital rights violations, with the aim of making digital spaces fairer. 7amleh publishes a large number of annual, quarterly, monthly and weekly reports, documenting each stage of its journey towards achieving these goals.

Let’s start by talking about the ways in which Artificial Intelligence (AI) and algorithms have been both beneficial and a threat in the current Palestinian crisis. For instance, how has AI been used to censor and silence content about Gaza?

Artificial Intelligence (AI) and algorithms represent a versatile toolkit with vast potential applications, both beneficial and concerning, particularly in the context of the current Palestinian crisis.

The misuse of AI can have far-reaching consequences, but in my opinion the bigger problem is the prevalence of biases ingrained in commonly used AI systems. AI relies heavily on data, and a significant portion of this data is sourced from the internet, which has historically been more representative of powerful groups; take media outlets as an example. This bias in data collection can lead to skewed results and reinforce existing power dynamics, and the growing reliance on data and automation could give rise to a whole new kind of insidiously concealed, biased system making the important decisions.

Another problematic side of utilizing AI is the variance in support across different languages. In our context, Hebrew has much more limited support than Arabic, which poses a challenge for researchers and developers studying patterns in Hebrew text, creates an imbalance in AI accessibility, and makes it harder to reach research-based decisions and recommendations. In my experience with 7amleh, this language barrier prompted the development of a dedicated language model to address the gap and ensure equitable representation and analysis of both Arabic and Hebrew content.
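
To make this concrete, here is a minimal sketch of how such a dedicated Hebrew classifier could be fine-tuned from a multilingual base model; the base model choice, label scheme and tiny dataset below are illustrative assumptions, not 7amleh’s actual implementation.

# Minimal sketch (assumptions, not 7amleh's actual code): fine-tuning a
# multilingual encoder into a Hebrew hate-speech classifier with Hugging Face
# Transformers. The base model, labels and the tiny dataset are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
base = "xlm-roberta-base"  # multilingual encoder that covers Hebrew (assumed choice)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)
# Placeholder annotations; a real corpus would hold thousands of labelled posts.
train = Dataset.from_dict({
    "text": ["example post one", "example post two"],
    "label": [1, 0],  # 1 = hate speech / violence, 0 = neutral
})
train = train.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)
args = TrainingArguments(output_dir="hebrew-hate-speech-model",
                         num_train_epochs=3, per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=train).train()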

Furthermore, AI’s impact on the Palestinian crisis extends beyond language biases. The “Automated Apartheid: How facial recognition fragments, segregates and controls Palestinians in the OPT” report by Amnesty International sheds light on the use of facial recognition technologies for surveillance and control in Palestine, raising serious ethical concerns: the technology fragments and segregates communities, reinforcing a system of control.

Can you share examples of AI/algorithms that have been used to address complex societal problems or enhance decision-making processes? [Follow up] Has there been any monitoring of the hate speech used since the start of the attacks?

In Palestine, the use of AI to address societal problems has been relatively limited, with its primary application observed in commercial settings rather than within the context of human rights. Notably, 7amleh stands out as a pioneering organization, leading the way in utilizing AI to tackle pressing issues in Palestine.

At 7amleh, we embarked on an initiative focused on combating hate speech and violence on social media platforms. We have been developing a sophisticated language model designed to analyze and categorize instances of hate speech and violence in both Hebrew and Arabic, and our Hebrew model has demonstrated significant success, with high accuracy metrics. We put this tool into action in March 2023 during the attacks on Huwara, where it effectively tracked instances of hate speech and violence. Continuing our commitment to promoting online safety, we have been using this AI solution to detect and monitor hate speech and violence since October 7th; check out our live dashboard here.
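
As a rough illustration of the monitoring side, the sketch below runs a classifier of this kind over collected posts and tallies flagged items per day, the kind of aggregate a live dashboard would chart; the model identifier, label, threshold and sample data are placeholders rather than 7amleh’s production pipeline.

# Rough illustration only: the model id, "HATE" label, threshold and sample
# data are placeholders, not 7amleh's production pipeline.
from collections import Counter
from transformers import pipeline
classifier = pipeline("text-classification",
                      model="example-org/hebrew-hate-speech")  # placeholder model id
def monitor(posts):
    """Tally flagged posts per day; `posts` is an iterable of (date, text) pairs."""
    daily = Counter()
    for date, text in posts:
        pred = classifier(text)[0]  # e.g. {"label": "HATE", "score": 0.93}
        if pred["label"] == "HATE" and pred["score"] >= 0.8:
            daily[date] += 1
    return daily  # per-day totals like these feed a time-series chart on a dashboard
sample = [("2023-10-08", "placeholder post text"),
          ("2023-10-08", "another placeholder post")]
print(monitor(sample))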

This innovative use of AI at 7amleh marks a pivotal step in harnessing technology to address societal challenges, particularly in the realm of human rights. While AI applications in Palestine are still evolving, initiatives like ours showcase the potential for technology to contribute positively to pressing issues within the region.

What are the specific negative and positive effects of social media AI for people with disabilities?

Navigating the impact of AI in social media on individuals with disabilities brings forth both concerns and considerations. One significant worry revolves around the inherent biases in AI, which might affect the accuracy and effectiveness of assistive technologies designed to aid people with disabilities.

In light of recent events since October 7th, there has been a notable surge in diverse online content related to the situation: images, videos and text. However, people are employing various techniques to bypass online censorship, such as altering writing styles or manipulating images to evade automated algorithms. While these strategies serve to minimize censorship and spread awareness of what is happening, they pose a challenge for assistive technologies that read text aloud or describe images, potentially rendering them less effective.

Another aspect to ponder is the functionality of algorithms designed to describe images for the visually impaired. As AI is known to carry biases, questions arise about the accuracy of these algorithms in depicting images without censorship. Ensuring that individuals with disabilities have equal access to uncensored data becomes crucial, raising concerns about the reliability of these AI tools for providing an accurate representation of online content.

How can we hold big tech companies, such as Facebook (Meta), Twitter (X) and TikTok accountable for AI/algorithmic design?

Holding big tech companies accountable for algorithmic/AI bias involves multifaceted approaches. Firstly, transparency is crucial; companies must disclose their AI systems' inner workings, including the datasets used for training algorithms. Audits by independent bodies could ensure the fairness and accuracy of these datasets, revealing biases or inaccuracies that might perpetuate discrimination. Additionally, implementing ethical guidelines or standards for AI development and usage can create a framework to prevent biased algorithms. Regular evaluations, reviews, and continuous improvements to AI models are essential to identify and rectify biases. Legal frameworks and regulations specifically targeting algorithmic accountability might also be necessary, ensuring that companies are legally responsible for discriminatory outcomes of their AI systems. Lastly, fostering diverse teams involved in AI development can mitigate biases in datasets and algorithms by incorporating a broader range of perspectives and experiences.
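
One way to picture the independent audits and regular evaluations mentioned above: compare a moderation model's error rates across language groups and treat a large gap as a reportable signal of disparate impact. The short sketch below uses invented numbers purely to show the arithmetic; it is not based on any real audit data.

# Illustrative audit arithmetic with invented data: compare a moderation
# model's false-positive rate (benign posts wrongly removed) across languages.
def false_positive_rate(records):
    """records: list of (model_removed_post, post_truly_violated_policy) pairs."""
    benign = [removed for removed, violated in records if not violated]
    return sum(benign) / len(benign) if benign else 0.0
# Placeholder audit samples per language group.
arabic_posts = [(True, False), (True, False), (False, False), (True, True)]
hebrew_posts = [(False, False), (False, False), (True, True), (False, False)]
fpr_ar = false_positive_rate(arabic_posts)
fpr_he = false_positive_rate(hebrew_posts)
print(f"False-positive rate, Arabic: {fpr_ar:.2f}; Hebrew: {fpr_he:.2f}")
if abs(fpr_ar - fpr_he) > 0.1:  # gap threshold chosen arbitrarily for the example
    print("Large disparity: a concrete finding an independent auditor could report")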

Thank you so much for sharing your experiences and significant contributions. How can the Lingua Café and digital rights community support 7amleh?

Thank you for providing us with the opportunity to discuss our projects and initiatives!

The most impactful way to support 7amleh is by actively spreading awareness of our initiatives across all media outlets. Sharing our work amplifies its reach, fostering a broader understanding of the challenges Palestinians face.

In addition to raising awareness, collaboration is highly valued as it allows us to pool diverse expertise, creating a more robust impact.

We welcome ideas, suggestions, and thoughts on how to enhance and expand our work. If you have innovative concepts or believe there are ways to build upon our efforts, please reach out. And for those with technical skills, we would love to work together to implement improvements and innovations that contribute to the effectiveness of our initiatives.

Financial support is also crucial as we are a non-profit and rely on external funding. Donations directly through our website provide the resources necessary to sustain and expand our efforts in addressing digital rights issues.