Statement of the Jagiellonian University’s Rector’s Board on the Evaluation of the Quality of Scientific Activity

“Changes in the evaluation system are necessary to meet the challenges of modern science. The solution we propose allows us to protect the higher education system from conducting another unfair evaluation that promotes mediocrity and generates pathologies,” reads the statement issued by the Jagiellonian University’s Rector’s Board regarding the currently applicable principles of assessing the quality of scientific activity, which a significant part of the scientific community considers flawed.

The evaluation of the quality of scientific activity is one of the basic tools for implementing the state’s science policy, and consequently for planning and organizing the work of scientific units so that they effectively achieve their overarching goal: scientific excellence. This system should encourage ambitious scientific research, foster innovation, and thus increase the impact of Polish researchers on world science, raise the prestige of Polish science in the international arena, and positively influence the development of the economy and the functioning of society.

The evaluation of the quality of scientific activity for the years 2017-2021 revealed numerous flaws in the currently applicable model, producing a drastically distorted image of Polish science. There is a serious risk that this image will be distorted to at least the same, and perhaps an even greater, extent in the ongoing evaluation period for the years 2022-2025. This is because the evaluation principles provided for in the Act of July 20, 2018, Law on Higher Education and Science (Journal of Laws of 2024, item 1571) not only fail to serve the implementation of the state’s science policy and the development of Polish science, but also stimulate pathologies and strategic errors in the management of universities and scientific units, driven by the short-term, ad hoc benefits tied to the awarded scientific category.

The evaluation was designed as a tool for making decisions about:

  • the distribution of subsidies (one of the components of the algorithm),
  • the rights to award academic degrees and nostrify foreign diplomas,
  • the rights to run fields of study and doctoral schools,
  • the university’s right to participate in the competition for funds under the Excellence Initiative – Research University program.

The current form of evaluation contains an excessively strong sanction component, unparalleled in other systems of science and higher education. This runs contrary to the basic purpose of evaluation stated above, which should be to stimulate the development of Polish scientific units towards scientific excellence.

The concerns of scientific units related to the consequences of lowering the category often cause, both at the management level and in the individual behavior of employees, actions that, instead of developing Polish science, lead to “artificial optimizations” or even unethical actions. As a result, the current evaluation model – instead of supporting the development of science – forces scientific units to subordinate themselves to the rigors of a rigid evaluation system, thereby limiting their autonomy in shaping their own scientific policy. The threat of losing key entitlements, such as the right to award academic degrees, or part of the subsidy, makes evaluation a tool of sanction, not diagnosis and improvement. Meanwhile, its primary goal should be to identify barriers to development and effectively eliminate them.

The flaws of the current evaluation model are:

  • Preference for scientific disciplines for which the number N, i.e., the number of people conducting scientific activity, is small (e.g., 20 people) in relation to disciplines represented by a large number of employees (e.g., 300). The existing system is very unfavorable for large, leading units.
  • Failure to take into account the differences between typically research units (such as the institutes of the Polish Academy of Sciences) and units with a mixed profile (research and teaching – e.g., universities).
  • Equal requirements for research staff and research and teaching staff, when their responsibilities are significantly different. For example, at the Jagiellonian University, a research employee performs tasks in the proportion: 90% research duties and 10% organizational duties, and a research and teaching employee in the proportion: 45% research duties, 45% teaching duties, and 10% organizational duties.
  • Susceptibility of the system to artificial optimization of results, which is a derivative of its algorithmic nature and mechanical conversion of data, which is manifested by:
    • stimulating unethical behavior aimed at “optimizing” indicators at the expense of research integrity, an example of which is the recently described practice of so-called paper mills;
    • promoting so-called staffing optimization, i.e., “transferring” non-publishing academic teachers employed in research or research-and-teaching positions to teaching positions, and leaving only employees with high scientific activity in those positions. As a consequence, at a significant share of higher education institutions, people employed in research or research-and-teaching positions constitute a shrinking percentage of the total number of academic teachers. This means that classes with students are increasingly taught by people who, over time, cease to conduct scientific activity, which lowers the quality of education;
    • promoting so-called disciplinary optimization, which consists of transferring an employee between disciplines or “artificially” creating the so-called category of bi-disciplinary scholars, the purpose of which is to increase “profits” or reduce “losses” for a given discipline. Assigning an employee to a discipline that is not subject to evaluation in the unit is also used.
  • Devaluation of the role of natural teams conducting research in one discipline and rewarding their atomization. Members of such single-discipline teams receive only partial point shares after publishing their research. At the same time, a person with a small substantive contribution, but from another discipline, gets a full set of points for the publication.
  • Flawed linking of the evaluation result, based mainly on the assessment of the average scientific effectiveness of employees, with the rights of individual discipline councils to award academic degrees. This flaw stems, among other things, from the fact that the assessment of all employees in the discipline (the number N) has no direct bearing on the competence of the discipline council, which usually consists of the most recognized scientists in the scientific unit.
  • Using a list of scored journals in the evaluation process, against which numerous, well-founded objections are raised (e.g., arbitrary changes in scoring, inclusion of journals that do not meet the criteria specified by law, changes in the list during the evaluation period).
  • Maintaining the presence in the list of so-called predatory journals, despite widespread and verified knowledge of reprehensible publishing practices in these periodicals, and taking into account articles published in them in the evaluation process.
  • Imposing unnecessary administrative duties on academic teachers, such as the need to regularly submit declarations regarding the inclusion of publications in the evaluation process. This obligation may also expose universities to losses, for example when a publication resulting from university-funded research cannot be counted because the required declaration was not submitted, even for reasons such as the death or illness of the employee.
  • Significant discrepancy between the results of the evaluation and the position of the university in international rankings, which often show a completely different picture of the quality of scientific activity of the units assessed in these rankings.
  • Mechanical approach to the assessment of scientific activity, resulting in the evaluation system failing to reward outstanding achievements that are evidence of scientific excellence, e.g. publications in the most prestigious journals, the most valuable research grants (e.g., ERC, Dioscuri Centers), or high citation indices, all of which “disappear” in the averaged, point-based criteria. The slot-based approach yields an assessment not of the unit’s excellence but of its average level. It also demotivates the best scientists, whose work has a disproportionately small impact on the evaluation result.
  • Stimulating the “overproduction” of scientific publications, which has consequences in the form of excessive expenditure of public funds on financing the publication process (especially in the case of weak open access journals). This effect is contrary to the expectation of an increase in the share of publications in the best journals in the total publications of the unit, which is one of the goals of the IDUB program.
  • Extremely high cost of the evaluation process, both at the central level and at the unit level. It is particularly associated with undertaking multi-parameter optimization of the evaluation result, which is a time-consuming and costly process, forcing the employment of “experts” in optimizing the result and the purchase of computer programs for this purpose, which have little to do with improving the quality of scientific research.

Taking into account these numerous, extremely serious flaws of the current model, commonly pointed out by the scientific community, we believe that an evaluation for the years 2022-2025 carried out on the currently applicable principles does not allow for a reliable assessment of the quality of scientific activity, and that committing human and financial resources to preparing and conducting such an evaluation will result in unnecessary costs. We also want to emphasize that the pathologies revealed in recent years, often directly related to the restrictive evaluation model, have an extremely negative impact on the image of individual universities, as well as on the entire Polish system of science and higher education. The mistakes that lead to these pathologies should not be repeated.

Therefore, we recommend:

  • Withdrawing from the evaluation for the years 2022-2025 and maintaining the scientific categories awarded to disciplines as a result of the evaluation for the years 2017-2021, leaving the possibility of voluntary submission to evaluation on the existing principles only to scientific units in which disciplines do not have a category or have a category lower than B+.
  • If it is not possible to withdraw from the evaluation – conducting it according to the applicable rules, while leaving the units with a guarantee of maintaining at least the current categories.
  • Immediate development, testing, and implementation of a new evaluation model that motivates the building of scientific excellence, enables a reliable and measurable assessment of scientific units, and constitutes a real tool for implementing the state’s science policy.

The evaluation for the following years, 2026-2029, should be based on criteria that eliminate the flaws of the current model. The new system should reward significant scientific achievements, measured by publications in the best journals, prestigious grants, international awards, and spectacular implementations of research results. It is also necessary to move away from the algorithm in which the success of the evaluated unit is treated as the sum of the results of individual employees. The quality of scientific research is not determined by the average, calculated in one way or another, but by the achievements of outstanding scientists, true leaders who create spaces of research excellence around them.

In summary, changes in the evaluation system are necessary to meet the challenges of modern science. The solution we propose allows us to protect the higher education system from conducting another unfair evaluation that promotes mediocrity and generates pathologies. At the same time, developing the assumptions of a new model, grounded in dialogue with the academic community and in expert opinions, that effectively rewards excellence and supports the development of Polish science in the international arena will offer a chance to build a fairer and more effective system of evaluating scientific units.

We encourage you to read the appeal to cease further evaluation on flawed principles, which members of the Conference of Rectors of Academic Medical Universities sent on January 27 to the Minister of Science and Higher Education, Dr. Eng. Marcin Kulasek, as well as the publication “Evaluation of science in Poland – criticism of the current system and proposals for repair”, in which scientists from all three faculties of the Jagiellonian University – Collegium Medicum discuss the flaws of the system.
