Trustworthy artificial intelligence (1)

As artificial intelligence (AI) is applied ever more widely, the security risks and trust issues it raises are being discussed more and more. Because the technology is developing so rapidly, we need AI to bring convenience to people without undermining respect for their basic rights, let alone exposing humanity to unpredictable risks. In recent years, countries around the world have become aware of the risks that AI applications may pose, and have begun standardization work, and even legislation, to ensure that AI technology develops in an orderly way. It is fair to say that trusted artificial intelligence (Trusted AI) is the foundation for the continued development of AI.

In this article we discuss several aspects of trustworthy AI. Because the material is extensive, the theoretical basis and application of trustworthy AI models are covered in three parts; this article is the first.

Part 1: Fairness and explainability of artificial intelligence models
Part 2: Quality of artificial intelligence models
Part 3: Introduction to IBM Watson OpenScale

Fairness

As artificial intelligence plays an ever larger role in business decision-making, there is growing discussion about the fairness of models. Model fairness aims to eliminate bias in machine learning algorithms, especially bias tied to individual attributes (such as gender, race, or religious belief) that should have no influence on the model's output. When discussing fairness, the following definitions are commonly used.

Favorable label: an output value that has a beneficial effect on what follows. For example, when predicting the default risk of a loan, the model output "No Risk" is a favorable label, because that prediction may help the lender decide to approve the loan. Conversely, an unfavorable label is an output value that has an adverse effect on subsequent decisions; in the same example, the model output "Risk" is an unfavorable label.

Protected attributes: attributes for which, after the data is grouped by that attribute, members of every group must be treated equally, such as gender, race, or religious belief. Note that protected attributes are closely tied to the application scenario; there is no universally applicable protected attribute. Protected attributes are also called fairness attributes.

Privileged: a protected attribute value whose group may receive systematically preferential treatment. For example, men may be more likely to obtain loans than women with equivalent qualifications; in that scenario, "male" is the privileged value, and the male group is the privileged group. The privileged group is also called the majority.
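To make these definitions concrete, here is a minimal sketch (not from the original article) that uses the loan example above: it computes the favorable-label rate for a privileged and an unprivileged group and the ratio between them, often called disparate impact. The column names ("sex", "prediction"), the toy data, and the plain pandas approach are illustrative assumptions, not part of any specific product mentioned in this series.

```python
import pandas as pd

# Toy predictions for the loan example: "No Risk" is the favorable label,
# "sex" is the protected (fairness) attribute, "male" the privileged value.
# Column names and values are illustrative assumptions.
df = pd.DataFrame({
    "sex":        ["male", "male", "male", "female", "female", "female"],
    "prediction": ["No Risk", "No Risk", "Risk", "No Risk", "Risk", "Risk"],
})

FAVORABLE = "No Risk"
PRIVILEGED = "male"

# Favorable-label rate per group: P(prediction = favorable | group)
rates = (
    df.assign(favorable=df["prediction"].eq(FAVORABLE))
      .groupby("sex")["favorable"]
      .mean()
)

priv_rate = rates[PRIVILEGED]               # privileged group's favorable rate
unpriv_rate = rates.drop(PRIVILEGED).iloc[0]  # unprivileged group's favorable rate

# Disparate impact: ratio of unprivileged to privileged favorable rates.
# A value close to 1.0 means both groups receive the favorable label at
# similar rates; values well below 1.0 suggest possible bias.
disparate_impact = unpriv_rate / priv_rate
print(f"privileged rate:   {priv_rate:.2f}")
print(f"unprivileged rate: {unpriv_rate:.2f}")
print(f"disparate impact:  {disparate_impact:.2f}")
```

Fairness toolkits compute the same kind of quantity from explicit lists of privileged and unprivileged groups; a common heuristic (the "four-fifths rule") treats a disparate impact below roughly 0.8 as a signal worth investigating.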