We used a web-based Delphi survey to collect consensus-based evidence on which to evaluate CanMEDS key competencies, first as feasible and then as consistent for workplace-based assessment in Flemish general practitioner (GP) training in Belgium [13,14,15]. Based on the available literature, we discussed and selected the steps required to ensure methodological rigor. Table 1 provides an overview of the design steps based on the Guidance on Conducting and REporting DElphi Studies (CREDES) guidelines [16]. Below, we further elaborate on our methodological choices, taking the CREDES design steps into account.
Study design
We chose to use an e-Delphi to recruit panelists from different geographical areas in Flanders and to reach a larger group in a cost-effective way. The online format was also preferred because this study took place during the COVID-19 pandemic. We defined feasibility as what can be observed in the workplace and whether the competency formulation is suitable for workplace-based assessment. We defined consistency as what can be consistently observed across different training settings and phases in the workplace (Fig. 1) [13,14,15]. Consensus was defined as 70% of respondents agreeing or strongly agreeing that an item was feasible or consistent for assessment in the workplace [17]. Non-consensus was defined as fewer than 70% of respondents agreeing or strongly agreeing, with no major change in consensus ratings or suggestions for modification from the panel after two rounds.
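To make the decision rule above concrete, the following is a minimal Python sketch of how the 70% threshold could be applied to a set of 5-point Likert ratings. The function name, the threshold constant, and the convention that ratings of 4 ("agree") and 5 ("strongly agree") count towards consensus are illustrative assumptions, not part of the study protocol or its analysis scripts.

```python
from typing import List

# Illustrative consensus rule: an item reaches consensus when at least 70%
# of panelists rate it 4 ("agree") or 5 ("strongly agree") on the 5-point scale.
CONSENSUS_THRESHOLD = 0.70  # assumed representation of the 70% cut-off

def reaches_consensus(ratings: List[int]) -> bool:
    """Return True if >= 70% of the given Likert ratings are 4 or 5."""
    if not ratings:
        return False
    agreeing = sum(1 for r in ratings if r >= 4)
    return agreeing / len(ratings) >= CONSENSUS_THRESHOLD

# Example: 8 of 10 panelists agree or strongly agree (80%), so consensus is reached.
print(reaches_consensus([5, 4, 4, 5, 3, 4, 5, 4, 2, 4]))  # True
```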
Definition of evaluation criteria for the Delphi study
To guarantee the iterative nature of our study, we decided to set a minimum number of three rounds [18, 19]. After each Delphi round in which consensus was reached for a CanMEDS key competency, that competency was no longer offered for evaluation. Although the traditional Delphi method begins with an unstructured round, we chose to follow a semi-structured approach, since our main goal was to validate the predefined CanMEDS framework [4]. We therefore used a combination of closed and open questions [20].
In the first round, panelists were asked to rate the CanMEDS key competencies as feasible and consistent on a 5-point Likert scale. They were also able to provide qualitative comments on each key competency [7, 14]. In the second round, we informed the panelists about the consensus ratings of round 1. In this round, the panelists were asked to formulate concrete proposals for modifications and to assess the two evaluation criteria separately. A document was also added that addressed the issues raised in round 1 based on the qualitative comments. To provide clarity on the wording of the competencies, the CanMEDS enabling competencies for each key competency were made available to support the panel with their proposals. In addition, we listed and categorized the most frequent qualitative comments to provide an overview. Decisions on modifications to key competencies were clearly communicated. We again asked the panel to rate the CanMEDS key competencies as feasible and as consistent for workplace-based assessment on the 5-point Likert scale.
In the third round, we presented summaries of the assessments from the previous rounds. At the panelists' request, we included a list of examples of how each CanMEDS key competency would transfer to the workplace. In this final round, we asked panelists whether or not they agreed that a CanMEDS key competency was feasible and consistent for assessment in the workplace. If not, they were required to specify their reasons for disagreeing [15]. Figure 2 shows an overview of the three Delphi rounds.

Flowchart of the three Delphi rounds
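The round structure implies a simple carry-over rule: a key competency is only re-evaluated in a later round while it has not yet reached consensus. The sketch below illustrates that rule under stated assumptions; the data structures, function names, and the requirement that consensus must be reached on both criteria before an item is dropped are assumptions made for illustration, not a description of the authors' procedure.

```python
def reaches_consensus(ratings: list) -> bool:
    # Same illustrative 70% rule as in the previous sketch.
    return bool(ratings) and sum(r >= 4 for r in ratings) / len(ratings) >= 0.70

def items_for_next_round(ratings_by_item: dict) -> list:
    """ratings_by_item maps a key-competency label to a dict holding the
    'feasible' and 'consistent' Likert ratings collected in the current round."""
    remaining = []
    for item, ratings in ratings_by_item.items():
        # Assumed rule: an item is withdrawn from later rounds only once it
        # reaches consensus on both evaluation criteria.
        if not (reaches_consensus(ratings["feasible"])
                and reaches_consensus(ratings["consistent"])):
            remaining.append(item)
    return remaining
```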
Study setting
To create a coherent approach across Flanders, four Flemish universities (KU Leuven, Ghent University, University of Antwerp and the Vrije Universiteit Brussel) have created an inter-university curriculum for GP training, which consists of three phases. Practical coordination and decision-making regarding the curriculum is the responsibility of the Interuniversity Centre for GP Training (ICGPT). Among other things, the ICGPT is responsible for allocating internships, organizing exams, holding meetings every 14 days with general practitioners and tutors, and managing trainees' learning portfolios, in which the evaluation of competencies is recorded.
Selection of the panel
To select panelists, we used purposive sampling [13, 21]. We set three selection criteria: 1) sufficient experience as a general practitioner (>3 years of experience), 2) experience in guiding and assessing trainees in the workplace, and 3) sufficient time and willingness to participate [7, 22]. Seventy panelists were invited by the principal investigator (BS) via e-mail. To include a range of perspectives, the panel consisted of both practicing tutors and trainee tutors [23]. Practicing tutors were workplace-based tutors who assisted trainees during their internship, whereas trainee tutors were attached to a university and provided guidance and facilitated peer learning and support (10–15 trainees per group) twice a month. Both groups were responsible for assessing trainees in the workplace. Panelists resided in different provinces of Flanders to minimize converging ideas and to ensure reliability [13, 23]. Although there is no consensus on an appropriate sample size for a Delphi design, a number varying between 15 and 30 panelists can provide reliable results [23, 24]. In our study, we selected panelists who shared the same medical background and a general understanding of the area of interest. In addition, to determine the sample size, we considered feasibility parameters to achieve a good response rate, such as allowing large time slots for each Delphi round and a reasonable time for completion.
Development and pilot of the Delphi survey
The 27 CanMEDS key competencies were translated from English into Dutch because the panel was Dutch-speaking. Figure 3 graphically illustrates how the Delphi survey was constructed. First, the CanMEDS competencies were translated by five researchers individually. After discussing and comparing all translations, we decided to keep the Dutch translation as close as possible to the original English framework. Second, to validate the translation and test the instrument, we sent it to a group of physicians for comment. Third, once feedback had been received and the Dutch translation was completed, the Dutch version of the framework was back-translated into English to verify the accuracy of the translation [25].

Process steps for developing the Delphi survey
Each Delphi round consisted of an introductory section, the CanMEDS key competency evaluation, and a concluding section. In the introduction, the purpose of each round was explained and the decision rules were communicated. We added the concluding section to give the panel room for communication and feedback unrelated to the CanMEDS key competencies (e.g., time required for completion, comments on layout). To avoid confusion between the different CanMEDS roles, the key competencies were grouped by role. Figure 4 shows how the survey items were displayed before consensus was reached.

Display of survey items for Delphi round 1
Data collection and analysis
To collect our data, we used the Qualtrics XM platform. This online tool made it possible to maintain anonymity among the panelists [26]. A personal link was emailed to each panelist, which made it possible to track response rates and send reminders to specific participants. Because of the high workload caused by the COVID-19 pandemic, each round lasted four weeks. We chose a flexible approach towards the panelists to increase the response rate for each round. Weekly reminders were sent to participants who had not completed the survey [26]. Data collection took place between October 2020 and February 2021. To analyze the quantitative data, we calculated descriptive statistics for each item using SPSS 27 (IBM SPSS Statistics 27). We used Microsoft Excel to list and categorize the qualitative data. The panelists' comments were recorded anonymously and verbatim. For the analysis of the qualitative data, we used content analysis [27].
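Although the study itself used SPSS and Excel, a minimal pandas sketch can show the kind of per-item summary this step produces: descriptive statistics and the percentage of panelists agreeing or strongly agreeing. The column names and example ratings below are hypothetical and are not taken from the study data.

```python
import pandas as pd

# Hypothetical long-format export: one row per panelist per item and criterion,
# with a 5-point Likert rating (column names are assumptions for this sketch).
responses = pd.DataFrame({
    "item": ["Communicator 1"] * 6,
    "criterion": ["feasible"] * 3 + ["consistent"] * 3,
    "rating": [5, 4, 3, 4, 2, 5],
})

# Descriptive statistics per item and criterion (count, mean, std, quartiles).
summary = responses.groupby(["item", "criterion"])["rating"].describe()

# Percentage of panelists rating 4 ("agree") or 5 ("strongly agree").
agreement = (
    responses.assign(agrees=responses["rating"] >= 4)
    .groupby(["item", "criterion"])["agrees"]
    .mean()
    .mul(100)
    .rename("percent_agreeing")
)

print(summary)
print(agreement)
```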
The role of the research team in preventing bias
The methodological choices made by the research team were in line with the available literature. We pre-defined and established the methodological steps before starting the study, and we applied, monitored and evaluated these steps throughout the study. The results of each round were discussed by the research team, while the qualitative data were interpreted by two researchers to ensure researcher triangulation [28].