This section covers:
- Why evaluation is important
- What should be evaluated
- The methodologies and tools for evaluation
- The challenges, concerns and ethical considerations you may encounter when evaluating a project of this type
Evaluation is often the final step in the process of creating an institution as publisher operation. Since many people are involved in the process, it is likely that many will have an interest in evaluation outcomes. However, those in a project management role, those in university leadership, researchers of pedagogical innovation, learning technology, or publishing, and, more generally, those interested in their university becoming a publisher are likely to benefit most directly from reading this section.
This section of the toolkit will consider the following:
- The development process, through observation and reflection, including e-textbook authoring, technical activities, publishing process and ongoing promotion
- Feedback from stakeholders through survey, dialogue, and data
- Impact of the publications on learning and teaching, through observation and anecdotal discussion with students and teachers
- Challenges of evaluation, with an overarching reflection on how evaluation may contribute to improvement and streamlining of future e-textbook developments
- The tools for evaluation, in order to better understand the range of data, feedback, dialogue and outcomes from the project
In the context of a project, the process of evaluation offers perspective on achievement, and commentary on further development. It allows individuals associated with the process and/or outcomes of a project the opportunity to discuss and interpret their experience and/or outputs.
In the context of an institutional publishing project, three key areas should be investigated. These are:
Perspectives on the development process
This includes e-textbook authoring, technical activities, the publishing process, and ongoing promotion. The UHI/Napier project team reflected on the impact of the process on those involved in it. The team also discussed the value and consumption of e-textbooks for students, academics, module leaders, libraries, the university, and a wider audience. Some questions to consider might be:
- What are the resources needed by the publishing team? What are the resources available to them? Were the tools sufficient for the task?
- Is sufficient time allocated to each part of the development process?
- How might you demonstrate an improvement in development knowledge?
- How cohesive is your development team?
What arose for the UHI/Napier team? Time. Commitment. Constraints. Costs. Institutional structure. Collaboration. Priorities. Outcome. Change. Professional. Adaptive. Pioneering.
You can read more about this in questions about development.
Interpreting feedback from stakeholders
This includes students and educators, those in a university management role, and a broader range of individuals with access to the publications. The UHI/Napier team garnered feedback from surveys and dialogue with groups, examined download and access data, and viewed anonymous reviews and feedback. In broad terms, evaluation should examine the following questions:
- Who are your stakeholders?
- What do they think of your publications and process?
- What can be learned that may help further developments?
What arose for the UHI/Napier team? New. Unexpected. Different. Dull. Searchable. Mine. Download. Reference. Cheap. Relevant. Accessible. Simple. Raw. Text. Authors. Learning.
More information is available in the document on how stakeholders can focus your evaluation.
Observing impact on learning and teaching
This may be difficult across a short, top-down project, but over time, who might be best placed to pioneer such an initiative, and who might actually do so? The UHI/Napier team have reflected on the possible reasons for this.
UHI/Napier’s evaluation suggested that the choice to use our publications rested with individual students rather than with those in a teaching role. Some questions that your evaluation might consider are:
- What does ‘impact’ look like in my circumstance? Might it be represented by a rise in grades or by something less tangible like increased independence for disabled readers?
- How successfully is the publication distributed to academics and students?
- What is revolutionary about my development?
The Jisc institution as e-textbook publisher project teams have written a number of case studies on observing impact. The University of Liverpool have looked at how they embedded Using Primary Sources in the Liverpool Curriculum.
What arose for the UHI/Napier team? Opportunity. Together. Relevance. Distribution. Empowerment.
Additional material is available in our document looking at teaching and learning.
What is your role in the production team, as evaluator? What lines do you draw to ensure the best interests of the institution as e-textbook publisher team, the product, and, above all, the students? The original report can be found in the application for cross-university ethical approval.
What do we evaluate?
An evaluator is often tasked to measure specific things and answer the question “is it worth it?” However, evaluators are also positioned to see and report on the unexpected, and what justifies the existence of an institution as e-textbook publisher is not always easy to measure. Read these documents and ask yourself whether, why, and how you might capture this point of view for evaluation: feedback from an author (1) and feedback from an author (2).
An article presenting the author’s view was also published as part of the project: Hogg, J. (2017). Creating a new type of e-textbook: Using Primary Sources. Insights, 30(1), 53–58. http://doi.org/10.1629/uksg.344
When you are producing and selling your own textbooks, the lines between commercial product sales, academic practice, and student experience can become blurred. Evaluating an institution as e-textbook publisher can challenge even an experienced researcher (see this example of a UCL Press students and staff survey).
No matter what kind of data you are after, chances are your survey has to serve many masters. So before you ask anyone else a single question, there are a few you need to ask yourself.
The tools contained here were designed for evaluation in a proof-of-concept capacity. This means they are not solely focused on the performance of an institution as publisher. They are meant to observe an institution as e-textbook publisher’s potential to grow, be sustainable, and meet the needs of a particular university.
You can adapt these tools in your institution, with the proper attributions. Each link contains a description of the tool and how it is used, visual examples (with annotations), and an inventory of relevant documents.
- Benchmarking – looking at your book in comparison to others
- Reader engagement survey – how do your students engage with materials before they see an IAP textbook? (survey only; for methodology see “the final survey” link)
- The final survey – how do you find out what your students thought of an Institution as e-textbook publisher textbook?
- The project reflection matrix – how does your team develop, manage, and grow its publishing?
- The resource profiling survey – what’s the “real cost” of doing business?
We recommend that any institution as publisher project finds opportunities to investigate three themes:
- The development journey of its publications
- The feedback from stakeholders about what it has achieved
- The impact of its publication on learning and teaching
Evaluators should seek mixed-method approaches to data collection. Surveys and dialogue offer direct input from students and authors. Distribution data (for example, from a university’s library or from Amazon and Smashwords) may uncover how much a publication is being used, and by whom.
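A mixed-method summary like the one described above can be sketched in a few lines of code. The example below is purely illustrative: the titles, ratings, and download counts are invented, and real distribution channels would supply their own data formats.

```python
# Illustrative sketch: combining survey ratings with per-channel download
# counts to produce a simple usage summary for each publication.
# All figures below are invented for the sake of the example.
from statistics import mean

survey_ratings = {  # title -> list of 1-5 ratings from a reader survey
    "Example Textbook A": [4, 5, 3, 4],
    "Example Textbook B": [3, 2, 4],
}
downloads = {  # title -> downloads reported by each distribution channel
    "Example Textbook A": {"library": 120, "amazon": 45},
    "Example Textbook B": {"library": 60, "smashwords": 15},
}

def summarise(title):
    """Return (mean survey rating, total downloads) for one publication."""
    return mean(survey_ratings[title]), sum(downloads[title].values())

for title in survey_ratings:
    rating, total = summarise(title)
    print(f"{title}: mean rating {rating:.1f}, {total} downloads")
```

Keeping the two data sources separate until the final summary makes it easy to add further channels (or further survey waves) without restructuring the analysis.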