Measuring Corporate Language Training Programs for Success

A Little Background on Corporate English Training Programs

As global business began to boom in the 1970s and 1980s, international organizations quickly realized that communication would be the limiting bottleneck of growth. Cultural misunderstandings frequently escalated into worldwide PR crises, and most were rooted in communication and translation mistakes. Corporations needed a solution and a standard language for business.


Because the US economy was the largest in the world at the time, it became a de facto standard that business would be done in English whenever possible. Corporations scrambled to provide English and English for Specific Purposes (ESP) training to their global staff. External business communications were not the only problem; even internal global departments struggled to communicate with each other.

There was little time, and little history of “best practices,” to draw on. As a result, corporate, in-house English training programs developed largely out of necessity rather than science. The variety of corporate English training programs is almost as extensive as the list of international organizations. Most corporate language training programs have no fixed form; each organization shapes its program around factors such as company policy, the unique goals of the training, and financial limitations. And while businesses have evolved significantly over the past thirty to forty years, language training programs have progressed far less.

Key Metrics for an English Training Program

For global English training programs, there are three key metrics:

Quality – Quality starts with consistency: ensuring that your entire organization receives the same high standard of English education, all the time.

Scalability – Sound processes and systems management build a solid foundation for English training, but programs must also remain adaptive and responsive to the ever-changing workplace, environment, and best practices.

Measurability – Identifying important KPIs for your English program, such as total cost of ownership, benefit, capacity, and engagement, will help you steer the development and improvement of your program.
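As an illustration, KPIs like these can often be tracked with simple aggregate calculations. The sketch below is hypothetical: the field names, figures, and the specific metrics chosen are assumptions for demonstration, not part of any particular program.

```python
# Hypothetical KPI sketch for an English training program.
# All field names and figures are illustrative assumptions.

def program_kpis(total_cost, learners, sessions_offered, sessions_attended):
    """Compute simple program KPIs from aggregate figures."""
    return {
        # Total cost of ownership divided across learners
        "cost_per_learner": total_cost / learners,
        # Capacity utilization: share of offered seat-time actually used
        "capacity_utilization": sessions_attended / sessions_offered,
        # Engagement proxy: average sessions attended per learner
        "sessions_per_learner": sessions_attended / learners,
    }

kpis = program_kpis(total_cost=120_000, learners=300,
                    sessions_offered=6_000, sessions_attended=4_500)
print(kpis)
```

Tracking even a handful of ratios like these, quarter over quarter, gives a program owner a baseline against which later changes can be judged.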


Evaluating a Corporate English Training Program

Evaluation is an important part of quality assurance, measurability, and improvement. Evaluations can help:

  • Identify what’s working and what’s not. 
  • Determine project outcomes and efficiencies. 
  • Improve staff and teachers. 
  • Add to existing knowledge base.
  • Identify bad or unscalable practices. 
  • Infuse new methods and pedagogy.

The data and methods used to evaluate the effectiveness of corporate English training programs should be informed by the well-established scientific methods of psychology, behavioral science, training, and pedagogy. The effective use of data for program improvement assumes that corporations have implemented an internal (or external) evaluation system, yet the collection of accurate and relevant data points for English training programs is often forgotten or overlooked. Although new practices have emerged through scientific research, many corporate English programs remain unchanged.

As the world gains knowledge, educational standards and practices continue to change, individuals continue to evolve, and their learning capacities vary. One of the most widely validated evaluation methods, CIPP, comes from Daniel Stufflebeam and originated in the 1960s. It has since been updated (as recently as 2007), but the model has not changed much: context, inputs, process, and product (CIPP).

  • Context – What should we do? 
  • Inputs – How should we do it? 
  • Process – Are we doing it as planned? 
  • Product – Did the program work?


By measuring actual outcomes and comparing them to anticipated outcomes, decision-makers are better able to decide whether the English training program should be continued, modified, or dropped altogether. Evaluation is the process of determining the extent to which objectives are attained; it is concerned not only with the appraisal of achievement but also with its improvement.

There are two types of evaluation: formative and summative. Formative evaluation produces information used to improve the instruction, project, and process, and helps ensure that all aspects of a program or project are likely to produce success (Ebel & Frisbie, 1991). It is conducted to monitor instructional processes and learning progress, providing continuous feedback that identifies learning errors (Gronlund, 1985). Summative evaluation determines whether the necessary processes have been carried out and the objectives are being met. Both summative and formative evaluations take place whenever an evaluation exercise is conducted (Journal of Education and Educational Development, Vol. 5, No. 1, 2018).
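The distinction between the two can be made concrete with a small sketch. Everything here, including the 0–100 score scale, the 70-point target, and the function names, is an illustrative assumption rather than a prescribed implementation.

```python
# Illustrative sketch of formative vs. summative checks.
# The 0-100 score scale and the 70-point target are assumptions.

TARGET_SCORE = 70

def formative_feedback(quiz_scores):
    """Formative: monitor progress mid-course and flag learners who
    appear to be falling behind, so instruction can be adjusted."""
    return [name for name, score in quiz_scores.items()
            if score < TARGET_SCORE]

def summative_result(final_scores):
    """Summative: after the course, determine whether the objective
    (average final score at or above the target) was met."""
    average = sum(final_scores) / len(final_scores)
    return average >= TARGET_SCORE

mid_course = {"Ana": 55, "Ben": 82, "Chen": 68}
print(formative_feedback(mid_course))      # learners needing support
print(summative_result([75, 80, 68, 72]))  # was the objective met?
```

The formative check feeds back into instruction while there is still time to act; the summative check answers the yes/no question the program sponsor ultimately asks.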

Using CIPP for Corporate English Program Evaluation

Stufflebeam’s model includes both formative and summative evaluation.

The CIPP Evaluation Model begins with Context Evaluation, which establishes the goals of the program. At this stage, the beneficiaries and their needs are identified, along with the resources available on hand and the potential problems that will need to be overcome. The background of the program is also evaluated, and any social, economic, political, geographical, and cultural factors within the immediate environment are accounted for.

Context Evaluation (C): provides information for the development and evaluation of mission, vision, values, goals and objectives, and priorities

A. Purposes

(1) define the characteristics of the environment
(2) determine general goals and specific objectives
(3) identify and diagnose the problems or barriers which might inhibit achieving the goals and objectives

B. Tasks

(1) define the environment, both actual and desired
(2) define unmet needs and unused opportunities
(3) diagnose problems or barriers

C. Methods

(1) conceptual analysis to define limits of population to be served
(2) empirical studies to define unmet needs and unused opportunities
(3) judgment of experts and clients on barriers and problems
(4) judgment of experts and clients on desired goals and objectives

At the next stage of the CIPP Evaluation Model, Input Evaluation encompasses program planning. Stakeholders will need to be engaged and suitable strategies of program execution identified; competing or conflicting strategies may also surface. A budget will need to be allocated and suitably apportioned. To ensure adequate coverage of the training program, research may also need to be carried out.


Input Evaluation (I): provides information for the development of program designs through evaluation of databases, internal and external stakeholders’ interests, WOTS up? (Weaknesses, Opportunities, Strengths, and Threats).

A. Purposes

(1) design a program (intervention) to meet the objectives
(2) determine the resources needed to deliver the program
(3) determine whether staff and available resources are adequate to implement the program

B. Tasks

(1)   develop a plan for a program through examination of various intervention strategies

(a)    examine strategies for achieving the plan

    • time requirements
    • funding and physical requirements
    • acceptability to client groups
    • potential to meet objectives
    • potential barriers

(b)   examine capabilities and resources of staff

    • expertise to do various strategies
    • funding and physical resources
    • potential barriers

(2)   develop a program implementation plan which considers time, resources, and barriers to overcome

In the Process Evaluation stage of the CIPP Evaluation Model, the actual actions are evaluated. This can be cyclic, repeated throughout the development stage, or during the implementation/execution of the training program. Controls to monitor the progress will have to be in place, as well as a system for feedback from learners and stakeholders, and vice versa.


Process Evaluation (P): develop ongoing evaluation of the implementation of major strategies through various tactical programs to accept, refine, or correct the program design (e.g. evaluation of recruitment, orientation, transition, and retention of first-year students).

A. Purpose

(1) provide decision makers with information necessary to determine if the program needs to be accepted, amended, or terminated.

B. Tasks

(1) identify discrepancies between actual implementation and intended design
(2) identify defects in the design or implementation plan

C. Methods

(1)   a staff member serves as the evaluator
(2)   said person monitors and keeps data on setting conditions, program elements as they occur
(3)   said person gives feedback on discrepancies and defects to the decision makers

Finally, the Product Evaluation stage of the CIPP Evaluation Model measures outcomes: the impact and reach of the training program, and its effectiveness in fulfilling the objectives. Transportability seeks to determine whether the training program can be transferred, adapted, or used in a different setting. Sustainability is another aspect to be measured, accounting for how durable and long-lasting the benefits are. Adjustments to the training program may also need to be made at this stage.

Product Evaluation (P): evaluation of the outcome of the program to decide to accept, amend, or terminate the program, using criteria directly related to the goals and objectives (e.g. put desired student outcomes into question form and conduct pre- and post-surveys). Loop back to the original objectives in the Context Evaluation (C) to see if and how these should be changed or modified based on the data.

A. Purpose

(1) provide decision makers with information necessary to determine if the program needs to be accepted, amended, or terminated.

B. Tasks

(1) develop the assessment of the program

C. Methods

(1) traditional research methods, multiple measures of objectives, and other methods
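As a rough illustration of the pre- and post-survey approach mentioned above, the sketch below computes the average gain between paired pre- and post-program scores and maps it to an accept/amend/terminate recommendation. The thresholds, score scale, and data are assumptions chosen for demonstration only.

```python
# Hypothetical product-evaluation sketch: compare pre- and post-program
# survey scores and suggest a decision. All thresholds are assumptions.

def product_decision(pre_scores, post_scores,
                     accept_gain=10.0, amend_gain=3.0):
    """Return ('accept'|'amend'|'terminate', average_gain) based on
    the mean improvement across paired pre/post scores."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    avg_gain = sum(gains) / len(gains)
    if avg_gain >= accept_gain:
        return "accept", avg_gain
    if avg_gain >= amend_gain:
        return "amend", avg_gain
    return "terminate", avg_gain

decision, gain = product_decision(pre_scores=[50, 62, 58, 45],
                                  post_scores=[68, 70, 66, 60])
print(decision, gain)
```

In practice the decision would weigh multiple measures, as the Methods item above notes, but even a single paired comparison like this makes the accept/amend/terminate logic explicit and repeatable.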

For further reference, I would like to share this guidance from Daniel Stufflebeam’s checklist for applying the CIPP Model:

“This checklist is designed to help evaluators evaluate programs with relatively long-term goals. The checklist’s first main function is to help evaluators generate timely evaluation reports that assist groups to plan, carry out, institutionalize, and/or disseminate effective services to targeted beneficiaries. The checklist’s other main function is to help evaluators review and assess a program’s history and issue a summative evaluation report on its merit, worth, probity, and significance, and the lessons learned.”

“This checklist has 10 components. The first—contractual agreements to guide the evaluation—is followed by the context, input, process, impact, effectiveness, sustainability, and transportability evaluation components. The last 2 are metaevaluation and the final synthesis report. Contracting for the evaluation is done at the evaluation’s outset, then updated as needed. The 7 CIPP components may be employed selectively and in different sequences and often simultaneously, depending on the needs of particular evaluations…


“The concept of evaluation underlying the CIPP Model and this checklist is that evaluations should assess and report an entity’s [merit, worth, probity, and significance,] and should also present lessons learned… The model’s main theme is that evaluation’s most important purpose is not to prove, but to improve.

“Timely communication of relevant evaluation findings to the client and right-to-know audiences is another key theme of this checklist… Following [a] feedback workshop, the evaluators should finalize the evaluation reports, revise the evaluation plan and schedule as appropriate, and transmit to the client and other designated recipients the finalized reports and any revised evaluation plans and schedule.”


Beyond guiding the evaluator’s work, the checklist gives advice for evaluation clients. For each of the 10 evaluation components, the checklist provides checkpoints on the left for evaluators and checkpoints on the right for evaluation clients.

Experience certified language ability for yourself

Emmersion is a fully automated and adaptive language assessment engine for certifying speaking, writing, and grammar ability in 9 global languages with immediate results. Click below to try a free Emmersion assessment for yourself.

Request a Demo

