In times of crisis, such as the one we have experienced with the COVID-19 pandemic, the use of artificial intelligence (AI) has become increasingly prevalent in many areas. AI has been used to help us track the spread of the virus, develop new treatments, and even predict outbreaks. However, as we continue to rely more and more on AI, we must also consider the risks associated with its use.
 
As we write, war rages across Eastern Europe, and there are further immediate threats to global security, such as the conflict that could follow a potential crisis over Taiwan involving China.
 
And so humanity as a whole is, generally speaking, scared, tired and traumatized by continuous exposure to psychological trauma, both direct and indirect, from the ongoing challenge of living under constant threat and in deteriorating living conditions.
 
This general sentiment does, in fact, influence the training of AI, not only general-purpose and commercial systems but also the specialized systems available to security forces, the military and political decision makers.
 
The general sentiment is the overall mood or feeling of a particular group of people. It is influenced by various factors such as culture, politics, and media. The general sentiment can be positive, negative, or neutral.
When it comes to AI training, the general sentiment can have a significant impact. If the general sentiment is negative, it can lead to biased and flawed AI models. For example, if society holds a negative sentiment towards a particular group of people, AI trained on that data may also exhibit bias against that group.
 
One of the primary risks of building AI during times of crisis is the potential for biased or incomplete data. In times of crisis, we often have limited information and are forced to make decisions based on incomplete data. If we feed this incomplete or biased data into AI algorithms, the resulting models will be similarly flawed. This can lead to decisions that are not in the best interests of the people we are trying to help.
 
Sentiment analysis is a common technique used in natural language processing to identify and extract subjective information from text, such as opinions, attitudes, and emotions. In the context of military AI training, sentiment analysis can be used to analyze the sentiments of military personnel, civilians, or adversaries towards certain events or situations.
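
As a concrete illustration of the technique, here is a minimal sentiment-analysis sketch using the VADER analyzer from the NLTK library; the sample texts are invented for the example and do not come from any real corpus or military system.

```python
# Minimal sentiment-analysis sketch using NLTK's VADER analyzer.
# Assumes the nltk package is installed; the sample texts are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

texts = [
    "The humanitarian corridor held and supplies arrived on time.",
    "Shelling continued through the night; people are exhausted and afraid.",
    "The briefing was rescheduled to Thursday.",
]

for text in texts:
    scores = analyzer.polarity_scores(text)  # neg / neu / pos / compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {text}")
```
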
Here are some examples of how sentiment can influence military AI training:
 
Bias in data: If sentiment analysis is used to train an AI model to recognize the sentiment of text, the resulting model can be biased if the training data only covers certain types of sentiment. For example, if the training data only includes positive sentiment about military actions, the AI model may be biased towards interpreting all sentiment as positive (a toy sketch after these examples illustrates this effect).
 
Targeted propaganda: Military AI systems can be used to identify and target propaganda towards specific groups of people based on their sentiments. For example, an AI system might be trained to identify individuals who are sympathetic to a particular cause and then target them with propaganda designed to influence their sentiment.
 
Emotional recognition: Military AI systems can also be trained to recognize emotions in individuals, which can be used to predict their behavior. For example, an AI system might be trained to recognize fear or anger in an adversary, which could be used to predict their next move in a conflict.
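
To make the data-bias point concrete, the toy sketch below trains a small text classifier on a deliberately skewed corpus in which nearly every example about military actions is labelled positive. The texts, labels and model choice are all invented for illustration, not drawn from any real system.

```python
# Toy illustration of sentiment bias caused by skewed training data.
# All texts and labels are invented; scikit-learn is assumed to be installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Skewed corpus: almost every military-related example is labelled positive.
train_texts = [
    "the operation was a great success",
    "troops welcomed as liberators",
    "precision strike praised by analysts",
    "morale is high after the advance",
    "flawless logistics during the campaign",
    "civilian casualties reported after the strike",  # lone negative example
]
train_labels = ["pos", "pos", "pos", "pos", "pos", "neg"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A clearly negative report is still likely to be scored as positive,
# because the model has seen almost nothing but positive examples.
test = "widespread destruction and displacement after the operation"
print(model.predict([test]))        # often ['pos'] despite the content
print(model.predict_proba([test]))  # probabilities skewed towards 'pos'
```
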
 
Another risk is the potential for AI to perpetuate existing inequalities, leading to social injustice and fuelling conflict and attrition. For example, if we rely on AI to allocate resources during a crisis, such as medical supplies or food aid, we may inadvertently reinforce existing patterns of inequality. This is because AI models are often trained on historical data, which reflects past patterns of discrimination and bias. If we do not take steps to address these issues, AI may simply perpetuate them.
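
A hedged sketch of how this can happen: if allocation follows historical delivery records rather than current need, districts that were under-served in the past stay under-served. The district names, figures and the proportional-allocation rule below are illustrative assumptions, not real data.

```python
# Illustrative sketch: allocating supplies in proportion to historical
# deliveries reproduces past inequality, regardless of current need.
# District names and numbers are invented for the example.

historical_share = {"District A": 0.70, "District B": 0.25, "District C": 0.05}
current_need = {"District A": 1000, "District B": 1500, "District C": 2000}

supplies_available = 3000

# "Model" trained on historical data: allocate proportionally to past share.
historical_allocation = {
    d: round(supplies_available * share) for d, share in historical_share.items()
}

# Need-based alternative: allocate proportionally to current need.
total_need = sum(current_need.values())
need_based_allocation = {
    d: round(supplies_available * need / total_need) for d, need in current_need.items()
}

for district in historical_share:
    print(f"{district}: historical-pattern {historical_allocation[district]:>4} "
          f"vs need-based {need_based_allocation[district]:>4} "
          f"(current need {current_need[district]})")
```
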
 
Furthermore, the use of AI during crisis times raises ethical questions about the role of technology in decision-making. Who should be responsible for decisions made by AI models? How do we ensure that these decisions are made in the best interests of society as a whole? These are complex questions that require careful consideration and dialogue.
 
It is also important to consider the potential unintended consequences of AI. For example, if we rely too heavily on AI to make decisions during a crisis, we may inadvertently overlook important factors that only human judgement can account for. Additionally, if AI is used to automate essential services, such as healthcare or emergency response, we run the risk of dehumanizing these services and losing the personal touch that characterizes human resilience.
 
