Browsing by Keyword "Digital competence"
Item: Design and Validation of a Novel Tool to Assess Citizens’ Netiquette and Information and Data Literacy Using Interactive Simulations (2022-03-14). Bartolomé, Juan; Garaizar, Pablo; ADV_INTER_PLAT.

Until recently, most digital literacy frameworks were based on assessment frameworks used by commercial entities. The release of the DigComp framework has allowed the development of tailored implementations for the evaluation of digital competence. However, the majority of these implementations rely on self-assessment and measure only low-order cognitive skills. This paper reports on a study to develop and validate an assessment instrument that includes interactive simulations to assess citizens’ digital competence; such interactive formats are particularly important for evaluating complex cognitive constructs like digital competence. Additionally, we selected two different approaches for designing the tests according to their scope, at the competence level or the competence-area level, and analysed their overall and dimensional validity and reliability. We summarise the issues addressed in each phase and the key points to consider in new implementations. For both approaches, the items show satisfactory difficulty and discrimination indices. Validity was ensured through expert validation, and Rasch analysis revealed good EAP/PV reliabilities. The tests therefore have sound psychometric properties that make them reliable and valid instruments for measuring digital competence. This paper contributes to the growing set of tools designed to evaluate digital competence and highlights the need to measure higher-order cognitive skills.
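The abstract reports difficulty and discrimination indicators for the items but does not spell out how they were computed. As a rough, non-authoritative illustration of how such classical item statistics are commonly obtained from dichotomously scored responses (the response matrix below is invented, and the study itself additionally relied on Rasch modelling), a minimal Python sketch:

```python
import numpy as np

# Hypothetical response matrix: rows = examinees, columns = items,
# 1 = correct, 0 = incorrect (illustrative data only, not from the study).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: corrected point-biserial correlation between each
# item score and the total score over the remaining items.
n_items = responses.shape[1]
discrimination = np.empty(n_items)
for j in range(n_items):
    rest_total = responses.sum(axis=1) - responses[:, j]
    discrimination[j] = np.corrcoef(responses[:, j], rest_total)[0, 1]

for j in range(n_items):
    print(f"Item {j + 1}: difficulty={difficulty[j]:.2f}, "
          f"discrimination={discrimination[j]:.2f}")
```

In a Rasch-based workflow such as the one the abstract describes, these descriptive indices would typically be complemented by model-based fit statistics and EAP/PV reliability estimates.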
Item: A Pragmatic Approach for Evaluating and Accrediting Digital Competence of Digital Profiles: A Case Study of Entrepreneurs and Remote Workers (2021-04-29). Bartolomé, Juan; Garaizar, Pablo; Larrucea, Xabier; ADV_INTER_PLAT; Tecnalia Research & Innovation.

Over the last decades, digital competence has become essential in the workplace; nowadays, it is difficult to find a job where no ICT skills are required. At the same time, there is a lack of ecosystems for adult reskilling in digital competence, and most of the existing ones do not use a common language and terminology, which reduces their chances of reaching a wider public. In addition, the assessment of digital competence cannot rely on simple self-assessment tests; it requires more complex tools such as simulations or other activities based on real scenarios. Considering this, we designed a performance-based evaluation system following a pragmatic approach based on the DigComp framework. We carried out a needs analysis based on expert consultation (63 teleworkers and 82 entrepreneurs) to create an assessment syllabus and implement the assessment modules. We then conducted an expert analysis (n=21) of the relationship between the content of the tests and the construct they were intended to measure. After refinement, the system was piloted by end users across Europe (n=525). Results confirmed that DigComp was the most appropriate reference when considering the transversality of digital competence, providing researchers with well-defined, clear criteria.

Item: Validating item response processes in digital competence assessment through eye-tracking techniques (ACM, 2020-10-21). Bartolomé, Juan; Garaizar, Pablo; Bastida, Leire; Garcia-Penalvo, Francisco Jose; ADV_INTER_PLAT.

This paper reports on an exploratory study aimed at validating item response processes in digital competence assessment through eye-tracking techniques. When measuring complex cognitive constructs, it is crucial to design the evaluation items so that they trigger the intended knowledge and skills. Furthermore, assessing the validity of a test requires considering not only the content of its evaluation tasks, but also whether examinees respond to those tasks by engaging construct-relevant response processes. The eye-tracking observations helped to fill an ‘explanatory gap’ by providing data on variation in item response processes that is not captured by other sources of process data, such as think-aloud protocols or computer-generated log files. We proposed a set of metrics that could help test designers validate the different item formats used in the evaluation of digital competence. The gaze data provided detailed information on test item response strategies, enabling profiling of examinee engagement and of the response processes associated with successful performance. There were notable differences between participants who correctly solved the tasks and those who failed, both in the time spent solving them and in their gaze data. These insights into response processes contributed to the validation of the assessment criteria of each item.
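The abstract mentions a set of gaze-based metrics without listing them. As an illustration of the kind of per-area-of-interest (AOI) indicators commonly derived from fixation logs in such studies (dwell time, fixation count, share of viewing time), here is a minimal sketch; the AOI names, record layout, and values are hypothetical and not taken from the paper:

```python
from collections import defaultdict

# Hypothetical fixation records for one examinee solving one item:
# (area_of_interest, fixation_duration_ms). Illustrative data only.
fixations = [
    ("instructions", 420),
    ("stimulus", 800),
    ("stimulus", 650),
    ("answer_options", 300),
    ("stimulus", 500),
    ("answer_options", 700),
]

def aoi_metrics(fixations):
    """Aggregate per-AOI dwell time and fixation count from a fixation log."""
    dwell = defaultdict(int)   # total dwell time per AOI (ms)
    count = defaultdict(int)   # number of fixations per AOI
    for aoi, duration in fixations:
        dwell[aoi] += duration
        count[aoi] += 1
    total = sum(dwell.values())
    return {
        aoi: {
            "dwell_ms": dwell[aoi],
            "fixations": count[aoi],
            "dwell_share": dwell[aoi] / total,  # proportion of viewing time
        }
        for aoi in dwell
    }

for aoi, metrics in aoi_metrics(fixations).items():
    print(aoi, metrics)
```

Comparing such per-AOI profiles, together with total response time, between examinees who solved an item and those who failed would surface the kind of differences in engagement and response strategy that the abstract describes.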