Wednesday, November 30, 2011

Innovation Waits for No One!




By Keith McDowell

“Now is not the time!” Have you ever been given that line? Even better, did you know that the classical concept of “now” that we all live by was forever destroyed by Einstein in 1905? That’s right! Two events that are simultaneous or occur at the same time in your personal reference frame occur at different times for someone who is speeding by you in their car. Wow! Talk about getting a person to church on time as famously crooned and celebrated in My Fair Lady. Obviously, the incremental difference in “now” is so tiny that we don’t notice it – unless, of course, you’ve received the latest near-“light speed” automobile as a Christmas present.
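For readers who want the one-line version of Einstein’s point, the standard Lorentz-transformation result will do – this is textbook special relativity, not anything specific to this post, with v the passing car’s speed, c the speed of light, and \Delta x the spatial separation between the two events:

    \Delta t' = \gamma \left( \Delta t - \frac{v\,\Delta x}{c^{2}} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

Two events simultaneous in your frame (\Delta t = 0) but spatially separated (\Delta x \neq 0) pick up an offset \Delta t' = -\gamma v \Delta x / c^{2} in the driver’s frame. At highway speeds v/c is roughly 10^{-7}, which is why the shift in “now” is far too small to notice.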

But then “time waits for no one.” Or does it? Can we slow down time by speeding away toward a distant galaxy? Well yes – relative to the reference frame of Earth, but it won’t change your personal perspective on the “passage of time.” And then there are those folks who race through life, hoping to slow down time and catch a few more moments. Good luck on that approach!
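The “speeding away” trick is the same \gamma at work – again a textbook relation rather than anything from this post:

    \Delta t_{\text{Earth}} = \gamma \, \Delta\tau_{\text{traveler}}, \qquad \gamma \geq 1

Observers on Earth see the traveler’s clock run slow, but the traveler’s own proper time \tau ticks along at one second per second, which is exactly why racing through life buys you no extra moments.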

What about innovation? Does it wait for us to “find the time” or does it wait for no one? I think the latter. And therein lies a problem for America. We’ve created an innovation ecosystem with moving parts or processes that waste time checking for conformity to accepted norms or established patterns of behavior. It’s an authoritarian-gatekeeper system guaranteed for the most part to replicate the norm and produce “me too” research and incremental innovation. We like to pretend that it is an “open system” where discoveries and innovations constantly bubble up to the surface of our conscious “now” – a system where the best and the brightest quickly reach the frontiers of the creative mind through independent research and innovation. But the reality is often different. And at the heart of the problem on the discovery end of the creation pipeline is one of society’s and academe’s oldest control mechanisms: peer review.

What is “peer review” and how does it affect the innovation ecosystem? “Peer review” is a simple concept. The notion is that one’s performance, whether as an individual or a collection of individuals, should be evaluated by one’s peers. It’s a practice carried out routinely in our legal system using a jury of “peers.” In academe, the practice takes many forms including principally the following activities:

  • Refereed publications
  • Grantsmanship
  • Tenure and promotion
  • Post-tenure review
  • Program review

To the consternation of many of our best researchers, these activities have grown over the past decade or two to the point that they have pushed aside the time needed to think creatively and be innovative. As Daniel J. Meyer stated succinctly in an article from The Chronicle of Higher Education: “It’s getting impossible to produce my own work I’m spending so much time assessing others!” He further states that “I have many comrades (not ‘in arms’ yet, but it is coming) who are experiencing an unbearable overload of review duties. … Draconian measures, you say? Perhaps. But maybe this is a Drago we should embrace. If not, we are going (to) [sic] take an ailing peer-review system and kill it outright.”

These are strong sentiments, and they are shared by many, including the author. But can we document the reality in the hope of finding a remedy? It’s tough to do. For example, let’s examine post-tenure review.

Post-tenure review came into vogue in the late 1990s as an accountability or audit tool to satisfy politicians and legislators that someone was looking over the shoulder of tenured faculty members to make sure they continued to be productive following tenure. Typically, on a timescale of five to eight years, tenured faculty members prepare a massive dossier documenting their performance, including student teaching evaluations. Often, external letters are solicited. Depending on the review, corrective actions might be taken, including a change in teaching load, a reduction in research space, or a host of other such actions. It’s the tenure process redux. And in most institutions, the data gathering is now formalized through the maintenance of a yearly faculty activity report. Woe unto the faculty member who doesn’t log in and update his or her data profile in a timely manner!

The demand for such performance data and accountability has become a battle cry for some elements of the right-wing conservative movement in America. The O’Donnell brouhaha in Texas comes to mind in that regard. But let’s be clear. While I support post-tenure review and the use of the faculty annual report, they represent a new element in the innovation ecosystem and they consume time – lots of it.

And then there are program reviews. Once again, the accountability and audit mentality dictated that university programs should be reviewed on a regular basis with a cycle time of five to eight years. Massive reports are created and external reviewers are conscripted – usually with the bribe of a stipend – to pass judgment on a program or department. Based on such data analysis, the Texas Higher Education Coordinating Board has determined that a number of “underperforming” physics programs should be shut down in Texas. Hmm, that should be a real motivator for poor and disadvantaged STEM students in those affected areas! Have we just turned off the next Michael Dell? Has their concept of “now” turned into “yesterday”?

Aside from the recent appearance of post-tenure review and program review, we’ve had in place since before World War II the process of reviewing research and scholarly manuscripts as a means to generate “refereed” publications. I’ve spoken to that issue previously. But what are the hard numbers? In Figure 1, I display the growth in the number of science and engineering publications using recently published data from the National Science Foundation. Over the twenty-year period from 1988 to 2008, the number of such publications nearly doubled and likely has now passed the one million mark per year. That’s a lot of papers to review for the science and engineering community!

Figure 1. Growth in the number of science and engineering publications per year, 1988–2008 (data from the National Science Foundation).
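As a back-of-the-envelope check on what “nearly doubled” over twenty years implies – my arithmetic, not a number taken from the NSF report:

    2^{1/20} \approx 1.035

That is roughly 3.5 percent compound growth per year, a rate that, if it simply continued past 2008, would keep adding tens of thousands of new papers to the annual reviewing load.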

With respect to grantsmanship and the peer review of proposals, the data appear to show some measure of saturation over the past decade. Using data taken from the annual Merit Review Reports to the National Science Board, I display in Figure 2 the number of externally reviewed proposals along with the number of distinct reviewers per year. Interestingly, the two numbers are approximately the same – one proposal per reviewer! One might argue that the past decade has shown a crossover in the number of proposals versus the number of distinct reviewers, but it will take another decade to prove this assertion, if true.

Figure 2. Number of externally reviewed proposals and number of distinct reviewers per year (data from the annual Merit Review Reports to the National Science Board).

The number of distinct reviews for the same time period is shown in Figure 3. Again, not much growth has occurred, and the numbers fluctuate from year to year.

Figure 3. Number of distinct reviews per year over the same period (data from the annual Merit Review Reports to the National Science Board).

A detailed examination of the NSB reports seems to indicate that there is a small trend toward fewer reviews per proposal. Based on these hard data, one cannot conclude that peer review of proposals has significantly increased as a burden over the past decade. Instead, it appears to be a saturated situation. But it still consumes time and is based on proposed research, not performance. I’ve addressed that issue and its effect on innovation elsewhere.
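To see how proposals, reviewers, and reviews hang together, take some round, purely illustrative numbers – these are not figures from the NSB reports: if 40,000 proposals are externally reviewed in a year by 40,000 distinct reviewers, and each proposal draws an average of three reviews, that is 120,000 reviews in total, or about three reviews written per reviewer per year. A slow drift toward fewer reviews per proposal lowers only that last multiplier, which squares with the flat totals in Figures 2 and 3.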

While peer review is firmly ingrained in the American innovation ecosystem, it’s time to understand how we use it and whether it truly is the wisest course of action as we enter the era of global competition.  Now is the time for America to come to terms with peer review, lest our competitors move faster and push our “now” into their “yesterday.” Innovation waits for no one.
