
World of Scientific Research. Issue 40


TECHNOLOGICAL METRICS FOR MEASURING THE IMPACT OF ARTIFICIAL INTELLIGENCE

 
22.02.2025 00:16
Authors: Iryna Turchenko, PhD, Associate Professor, West Ukrainian National University; Lu Qiwei, Xu Baoxi, Jiang Peng, master's students, West Ukrainian National University
[Section 2. Information systems and technologies]

ORCID: 0000-0002-9441-6669 Iryna Turchenko

The introduction of artificial intelligence into various fields of activity requires evaluating it against a range of metrics – technological, economic, social, and others. Among these, technological metrics play an especially important role.

By technological metrics [1], we mean quantitative indicators for assessing the performance, accuracy, and efficiency of artificial intelligence. They help developers and researchers determine how well a model performs its tasks, how fast it processes data, how well it generalizes knowledge, and whether it is biased. Without clear metrics, it is impossible to compare different algorithms, improve their performance, or ensure the reliability of AI in real-world scenarios.

The choice of metric depends on the specifics of the task, since no single metric fully characterizes the quality of a model in all aspects. Accuracy and quality metrics assess how well artificial intelligence performs tasks such as classification, regression, or data generation. Classification models are evaluated with Precision, the proportion of positive predictions that are correct; Recall, the proportion of real positive cases that are found; and their harmonic mean, the F1-score. Regression models are evaluated with Mean Squared Error (MSE) or Mean Absolute Error (MAE), which measure the average difference between predicted and actual values. For language and generative models, BLEU, ROUGE, and METEOR are used to compare the generated text with a reference text.
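
The definitions above can be sketched directly in code. This is a minimal illustration with made-up labels and predictions, not a real evaluation pipeline:

```python
# Hypothetical labels and predictions for a binary classifier (illustrative data).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)                            # correct share of positive predictions
recall = tp / (tp + fn)                               # share of real positives that were found
f1 = 2 * precision * recall / (precision + recall)    # harmonic mean of the two

# Regression errors for hypothetical predicted vs. actual values.
actual = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 3.0, 8.0]
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

In practice these values are usually computed by library functions (e.g. in scikit-learn) rather than by hand, but the arithmetic is exactly the above.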

The second class of metrics includes performance and efficiency metrics that measure how fast and optimally an AI model performs, which is especially important for real-world applications. 

Latency determines how long it takes for the model to receive and process input data, which is critical for real-time systems, such as in finance or medicine. 

Throughput reflects how many operations or queries the model can perform per second, which is important for scalable services. 

Evaluating computing resource utilization (CPU, GPU, RAM) and power consumption helps to understand how efficiently the model uses the available hardware.

Optimizing these metrics yields better performance without significant additional resource consumption, making AI systems faster, more stable, and more cost-effective.
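
Latency and throughput as described above can be measured with simple wall-clock timing. The sketch below uses a hypothetical stand-in function instead of a real model:

```python
import time

def model_predict(x):
    """Stand-in for a real model's inference call (hypothetical)."""
    return sum(v * v for v in x)

inputs = [[float(i)] * 100 for i in range(1000)]

# Latency: time one request end to end.
start = time.perf_counter()
model_predict(inputs[0])
latency_s = time.perf_counter() - start

# Throughput: requests completed per second over a whole batch.
start = time.perf_counter()
for x in inputs:
    model_predict(x)
throughput_qps = len(inputs) / (time.perf_counter() - start)
```

Real benchmarks additionally report percentile latencies (p95, p99) over many runs, since a single measurement is noisy.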

Generalizability and robustness metrics are also important; they assess how well an AI system adapts to new, unseen data and how stable its results remain under different conditions.

Overfitting and underfitting indicators show whether the model memorizes the training data instead of generalizing patterns or, conversely, is too simple to produce high-quality predictions.

Robustness evaluates how the quality of predictions changes when noise or adversarial inputs are introduced, which is especially important in cybersecurity and autonomous systems.
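
One simple robustness check is to compare accuracy on clean inputs with accuracy after noise injection. The toy classifier and data below are hypothetical:

```python
import random

random.seed(0)

def classify(x, threshold=0.5):
    """Toy model: predicts 1 when the feature exceeds a threshold (hypothetical)."""
    return 1 if x > threshold else 0

# Clean feature values and their true labels.
features = [0.1, 0.9, 0.4, 0.8, 0.2, 0.7]
labels = [0, 1, 0, 1, 0, 1]

def accuracy(xs):
    return sum(classify(x) == y for x, y in zip(xs, labels)) / len(labels)

clean_acc = accuracy(features)
# Inject Gaussian noise and re-evaluate: the accuracy drop is a robustness signal.
noisy = [x + random.gauss(0, 0.3) for x in features]
robustness_gap = clean_acc - accuracy(noisy)
```

A large gap means the model's decisions are fragile near its decision boundary; adversarial robustness testing follows the same pattern with crafted rather than random perturbations.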

Fairness determines whether the model discriminates against certain groups of users due to the uneven distribution of training data. 
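
A common fairness check of this kind is the demographic parity gap: the difference in positive-prediction rates between user groups. The records below are made-up for illustration:

```python
# Hypothetical model predictions tagged with a group attribute per user.
records = [
    {"group": "A", "pred": 1}, {"group": "A", "pred": 1},
    {"group": "A", "pred": 0}, {"group": "B", "pred": 1},
    {"group": "B", "pred": 0}, {"group": "B", "pred": 0},
]

def positive_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["pred"] for r in rows) / len(rows)

# Demographic parity gap: a value near 0 means both groups receive
# positive predictions at similar rates.
parity_gap = abs(positive_rate("A") - positive_rate("B"))
```

Here group A receives positive predictions at twice the rate of group B (2/3 vs. 1/3), which would warrant investigating the training data distribution.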

Assessing these metrics helps to create more reliable and ethical AI solutions for use in critical areas. Various methods are used to measure technological metrics, making it possible to evaluate the performance, accuracy, and reliability of AI models under different conditions.

Test and validation datasets are used to evaluate the quality of a model on data it has not seen during training, which helps to avoid overfitting. A/B testing compares two or more models to determine which performs better on the chosen metrics in a real-world environment.

Real-time performance monitoring includes analyzing latency, throughput, and computing resource utilization in production systems, which makes it possible to identify problems early and improve efficiency.

The combination of these methods helps developers choose the best models and ensure their stable operation after implementation.

Measuring technology metrics in artificial intelligence faces a number of challenges that make it difficult to objectively evaluate models. One of the key challenges is the selection of relevant metrics, as different tasks require different approaches to evaluation, and no single metric provides a complete picture of performance. 

The trade-off between accuracy and speed is also a major issue: more complex models may provide better quality but be slower or consume more resources. Another challenge is the dynamism of AI systems: a model's performance can drift over time as the input data changes, which requires constant monitoring and reassessment of metrics.

In addition, ethical aspects and biases in training data can affect the results, making the model less fair or reliable for certain user groups. Addressing these challenges requires an integrated approach that combines automated analysis, real-world testing, and ethical oversight.

Thus, technological metrics play a key role in measuring the effectiveness of artificial intelligence, making it possible to assess its accuracy, performance, stability, and generalizability. It is important to choose the right metrics for a specific task, as no single metric is universal across all models. Measurement methods such as validation-set testing, A/B testing, and real-time monitoring help to obtain objective results and improve models. However, challenges remain, such as the trade-off between accuracy and speed, the need for constant monitoring, and the risk of algorithmic bias. Achieving high quality and reliability in AI systems therefore requires a comprehensive approach that includes not only technical improvements but also ethical oversight and the adaptation of models to changing conditions of use.

References

1. Demystifying AI/ML Performance Metrics: A Guide to Building High-Impact Models. URL: https://svitla.com/blog/ai-ml-performance-metrics



This work is licensed under a Creative Commons Attribution 4.0 International License.
