Mitigating sensitive data exposure with adversarial learning for fairness recommendation systems

Abstract: Fairness is an important research problem for recommendation systems, and unfair recommendation methods can lead to discrimination against users. Gender is a sensitive feature, and exposing it can lead to unfair treatment of males and females. However, gender discriminati...
Full description

Saved in:
Author:

Liu, Haifeng [author]

Wang, Yukai

Lin, Hongfei

Xu, Bo

Zhao, Nan

Format:

E-Article

Language:

English

Published:

2022

Keywords:

Recommendation systems

Fairness

Gender bias

Adversarial learning

Note:

© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022

Parent work:

Contained in: Neural computing & applications - London : Springer, 1993, 34(2022), 20, 11 June, pages 18097-18111

Parent work:

volume:34 ; year:2022 ; number:20 ; day:11 ; month:06 ; pages:18097-18111

Links:

Full text

DOI / URN:

10.1007/s00521-022-07373-4

Catalog ID:

SPR048182893
