Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks
Authors: Songül Tolan, Annarosa Pesole, Fernando Martínez-Plumed, Enrique Fernández-Macías, José Hernández-Orallo, Emilia Gómez
JAIR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper we develop a framework for analysing the impact of Artificial Intelligence (AI) on occupations. This framework maps 59 generic tasks from worker surveys and an occupational database to 14 cognitive abilities... and these to a comprehensive list of 328 AI benchmarks... An application of our framework to occupational databases gives insights into the abilities through which AI is most likely to affect jobs and allows for a ranking of occupations with respect to AI exposure. Sections 4, 5, and 6 describe the methodology, data preparation, and application results of this framework, including figures and tables of quantitative analysis. |
| Researcher Affiliation | Academia | The authors are affiliated with the "Joint Research Centre, European Commission" and "Universitat Politècnica de València", with email domains "@ec.europa.eu" and "@upv.es". These indicate affiliations with public research institutions and a university, respectively. |
| Pseudocode | No | The paper describes its methodology in Section 4 using prose and mathematical equations (e.g., Equation 2, Equation 3), but it does not present any explicit pseudocode blocks or algorithms. |
| Open Source Code | Yes | All the data, code and results can be found in https://github.com/nandomp/AIlabour |
| Open Datasets | Yes | We combine data from three different sources: two worker surveys: (1) the European Working Conditions Survey (EWCS) and (2) the OECD Survey of Adult Skills (PIAAC), as well as the Occupational Information Network (O*NET). ... For the present framework we generate a comprehensive repository of AI benchmarks (Martínez-Plumed et al., 2020a,b) ... as well as open resources such as Papers With Code ... This repository of AI benchmarks is open and accessible (Martínez-Plumed et al., 2020a,b). |
| Dataset Splits | No | The paper describes a framework that maps tasks to cognitive abilities and AI benchmarks, using existing worker surveys and AI benchmark repositories. It details data preparation steps like rescaling and averaging scores (Section 5.1), but it does not specify any training, validation, or test dataset splits in the context of machine learning experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to conduct its analysis or computations. While it discusses general factors like 'better hardware' in the context of AI progress, it does not specify any GPU/CPU models, memory, or computing environments used for the study itself. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in its analysis. It mentions platforms and tools like 'Papers With Code' and 'News Finder (Buchanan et al., 2013)' but without version details relevant to reproducibility of their own methodology. |
| Experiment Setup | No | The paper describes a methodology for constructing a framework and analyzing data, including rules for matrix construction, data scaling, and annotation criteria (e.g., 'an ability is assigned to a task if at least two annotators assigned this ability' in Section 4.1). However, it does not provide specific experimental setup details such as hyperparameters, learning rates, batch sizes, or training schedules, as it does not involve training machine learning models. |
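The framework's core computation, as summarized in the table above, maps 59 tasks to 14 cognitive abilities and those abilities to 328 AI benchmarks, then aggregates benchmark activity into per-task AI exposure scores. The chain can be sketched as a pair of matrix products. This is a minimal, hypothetical illustration, not the authors' code: all matrices below are random stand-ins, whereas the real mappings come from annotator votes (including the "at least two annotators" assignment rule noted under Experiment Setup) and from the benchmark repository at https://github.com/nandomp/AIlabour.

```python
import numpy as np

# Dimensions taken from the paper: 59 generic tasks, 14 cognitive
# abilities, 328 AI benchmarks. All matrix contents here are random
# placeholders for illustration only.
rng = np.random.default_rng(0)
n_tasks, n_abilities, n_benchmarks = 59, 14, 328

# Task-to-ability matrix: an ability is assigned to a task if at
# least two of three (hypothetical) annotators assigned it.
n_annotators = 3
votes = rng.integers(0, 2, size=(n_annotators, n_tasks, n_abilities))
task_ability = (votes.sum(axis=0) >= 2).astype(float)

# Ability-to-benchmark mapping (binary, for simplicity) and a
# research-intensity score per benchmark, rescaled to [0, 1].
ability_benchmark = rng.integers(0, 2, size=(n_abilities, n_benchmarks)).astype(float)
intensity = rng.random(n_benchmarks)

# Exposure per ability: mean intensity over its linked benchmarks.
ability_exposure = (ability_benchmark @ intensity) / ability_benchmark.sum(axis=1)

# Exposure per task: mean over its assigned abilities (tasks with no
# assigned ability get exposure 0 rather than a division by zero).
task_exposure = (task_ability @ ability_exposure) / task_ability.sum(axis=1).clip(min=1)

print(task_exposure.shape)  # (59,)
```

Ranking occupations by AI exposure, as the paper does, would then amount to one further weighted average of `task_exposure` using each occupation's task-intensity profile from the worker surveys.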