TCI Cluster Evaluation Working Group Workshop
Oslo, Norway, 7-8 September 2017
The TCI Cluster Evaluation Working Group met in Oslo on 7-8 September (starting at 12:00 on Day 1 and concluding at 13:00 on Day 2), hosted by Innovation Norway, with 18 participants who shared their experience of cluster programme evaluation.
On the first day, this workshop focused on the challenges of programme-level evaluation – using Norway’s ongoing evaluation as a case example. Main topics and conclusions from the group discussion were:
What do programme level evaluations add to cluster initiative level evaluations?
Programme-level evaluations re-visit and test the relevance of the policy frame/objectives and how the programme has developed over time; explore and evidence the efficiency of the programme’s implementation; and evidence the effectiveness of the overall portfolio of cluster initiatives (including the comparison of results for different types of cluster initiatives, and the dynamics of performance/results over time for the full set of cluster initiatives in the programme).
What are the key aspects that should be included in such evaluations?
- Evaluations always need to question the relevance/rationale of the programme…is it still there?
- Look at efficiency…results in relation to spending, and possibly results in relation to other programmes
- When exploring effectiveness (i.e. results and longer-term impacts for involved actors), the use of econometric analyses (with control groups) provides more in-depth evidence.
In addition to the performance of companies, research organisations and other actors involved in the individual cluster initiatives, evaluations should consider the performance of the broader (regional innovation) system…how the cluster policy/programme/cluster initiative is affecting the system.
Groups discussed how often such programme-level (or impact/effect) evaluations should be conducted, and responses ranged from “every 5 years” in Norway to “after 12 years” in Sweden. (It depends on the level of investment and relative impact.)
Groups also discussed whether international benchmarking (on programme level and/or on cluster initiative level) should be an aspect of programme-level evaluations.
The response was that programme objectives and effect logics, levels of and approaches to funding, etc. varied quite broadly among cluster programmes. This variation made international benchmarking quite difficult.
What methods/ data can illustrate the systemic effects and wider value of clusters?
Systemic effects (or performance of the broader innovation system) can be viewed on different levels (firm, within cluster, broader system).
Aspects that can be included to illustrate the systemic effects and wider value of clusters include:
- the development/dynamism of collaboration…the relations between the actors
- the development of the innovation ecosystem (around the cluster initiative), including:
* knowledge/relational spillovers
* labour market/mobility
* investments (including FDI)
- leverage of funding and where funding goes (to which actors, and for which type of activities)
- reported “significant happenings” in the region (changes in system, policy, behaviours) and the role that the cluster played.
The discussion also noted how clusters provide market intelligence on what is going on in the regional system…which can help improve cluster policy as well as other policies.
Should / can we account for interactions between cluster programmes & other policies?
It is difficult to directly compare or rank cluster programmes relative to other innovation programmes. Rather, it is more relevant to “map” how target audiences participate in various programmes and how programmes relate/link to each other. (This can be considered part of “systemic effects”.)
Where are opportunities to learn from each other? Does a "standardised" approach make sense?
Although “standardized” approaches to evaluation and international benchmarking are difficult (given differences in programme objectives/logics, etc.), there are still opportunities to learn from each other:
- different approaches to organizing programme implementation.
- different approaches to monitoring and evaluation processes.
Opportunities for learning are primarily in relation to relevance and efficiency (but not so much in relation to effectiveness/performance/results).
At the same time, there is interest in learning more about the survival/lifecycle of clusters in relation to the (different) rationales/objectives of cluster programmes, in both their funding and advisory support/strategic intelligence roles.
Do clusters ‘survive’ in the same way? Is the intervention logic/evaluation approach the same in both types of programmes? What’s reasonable to expect at different phases of development…before you get to economic results?
Are there different expectations depending on type of cluster/sector and phase of development?
The group agreed that monitoring and evaluation processes need to be better at elaborating the effect logics for individual cluster initiatives/cluster projects, and that programmes/policies might need to be adapted to specific needs/expectations of different types of initiatives.
“Need to be better at defining the stepping stones…by activity as part of the evaluation strategy.” (Stefan)
“Need to fill in the picture between vision and activities…cluster initiatives need to describe what are THEIR stepping stones?” (Göran)
“Is there a need for segmenting the cluster portfolio…and treating them differently/expecting different things?” (Kristianne)
Programme-level evaluations need to be based on what the cluster programme initially set out to do. Cluster programmes need to be explicit about programme logics (and expected results). Cluster evaluations should follow up on this (i.e. tracking results), but also explore HOW it happens.
On the second day, the workshop followed up on previous working group efforts to develop and test a set of common survey questions for cluster firms (including key indicators to capture collaborative dynamics/“the human element” and evidence critical success factors), and to take next steps towards a common tool for capturing the user perspective in cluster efforts.
Following a summary of experience from survey pilots conducted in Australia, Plymouth (UK) and Colombia, additional experiences were presented from Scotland, the Basque Country and Catalonia. After the presentations, the group discussed a number of questions:
What specific revisions could be made to the survey?
- The questions that are included in surveys always depend on the objective of the evaluation. (Most pilots included some “TCI survey” questions among other questions.)
- Survey question D1 (on the number and type of collaborative linkages) was perceived as too difficult to answer, or was excluded. Question C1 was perceived as a given (and could be omitted). Questions C2-5 and D2-3 (perhaps re-phrased, as in the Basque Country pilot) were viewed as most useful.
- The length of the survey (and questions with multiple sub-questions) is a challenge and limits response rates. Response rates are higher when firm-level surveys are integrated with cluster initiatives’ own processes (experience from Norway and the Basque Country).
- The group should strive to find other approaches to accessing similar data.
(From 2018, Innovation Norway plans to move towards non-survey approaches to capture the data on who collaborates with whom – using 3 or 4 different sources of data and big data analytics.)
How can other approaches complement surveys in capturing collaborative dynamics?
- Social network analysis, data on research collaboration, and automatic analysis of emails/social media are alternative sources of data on linkages and collaborative dynamics. (It was suggested to invite experts on new methods of data collection – including text and data mining – to present at upcoming TCI events.)
- Additional tools used for structured follow-up and action research were presented by Stefan Brendstrup. Such tools could be used to frame strategic dialogue processes (by regional offices in
Norway or by action researchers in Sweden/Vinnväxt).
- Overall, the level of engagement of firms and other actors (i.e. willingness to engage in activities and pay for services) is viewed as a good overall proxy for the “value of clustering for firms”.
With some notable disagreements, overall the group felt that firm-level surveys were a useful tool…particularly if integrated into cluster initiatives’ own strategic processes and programme-level dialogue/development processes with cluster initiatives.
Surveys of firms (the user perspective) provide evidence of what companies are doing (within the cluster initiative) and what value companies perceive – providing cluster organisations with an important input to developing/adjusting the strategy.
Firm-level surveys also provide evidence of clusters’ role as a sounding board for government (in a strategic leadership role in the region).
Group discussion also highlighted the benefit of using statistics on cluster firms’ economic performance (compared with control groups) to help interpret survey results and provide more robust evidence of the benefits of cluster efforts.
However there was also recognition of the limits of surveys, and interest in exploring potential alternative data sources in a ‘big data’ era.
What should be the next steps to bring together the data/experiences that are emerging?
- James and Madeline will lead efforts to compare results from all pilots and draft a revised version of the common survey questions (with a focus on fewer key questions, structured slightly differently). This will be communicated at the TCI conference in Bogota, and a new round of piloting efforts will follow (including Northern Ireland; Sweden, both TVV and Vinnova; and Schleswig-Holstein).
- Emily will work with Stefan to explore development of additional tools to use for structured follow-up and action research.
In the last session of the meeting, the group reviewed a number of other issues and emerging opportunities for the cluster evaluation agenda, including:
- Review of publications on cluster evaluation
- Evaluation session at the Bogota conference
- Cluster evaluation and smart specialization strategies
- Cluster evaluation ‘beyond GDP’ (shared value and clusters toolkit)
- Cluster evaluation in relation to Agenda 2030 for sustainable development (Moa Eklund from Vinnova to present on this at the TCI conference in Bogota).
- Collaboration with next round of European Cluster Observatory: smart guide to cluster evaluation.
Following the TCI conference in Bogota, the next meeting of the Cluster Evaluation working group will be held:
- Either in Cork, May 2018…focused on analysis of big data
- Or as part of the TCI European Regional Conference in Sofia, Bulgaria (19-22 March)