Nov 11 / Steve Runge

The Runges are on the Move (Hopefully)

Glenda and I have entered a period of transition as I apply for a position that will finally enable me to teach regularly. I first started this process two years ago, based on a growing desire to get into the classroom and the recognition that the long-term projects I’ve worked on at Faithlife are nearing completion. My intention all through grad school was primarily to teach with some research, whereas my current position has been a mixture of research, marketing, and product management.

Back in October 2006, Faithlife hired me into an experimental position, essentially a two-year post-doc to try and convert my concept of an annotated Greek NT into a product. Thanks to collaboration with Rick Brannan, Eli Evans and others, the result was the Lexham Discourse Greek New Testament and (thanks to Bob Pritchett’s urging) the High Definition New Testament: ESV Edition. Words fail to describe my gratitude for the opportunity to build the teaching resources I had been conceptualizing during my doctoral research, for without Faithlife’s support I seriously doubt they would have come to fruition. The commercial success of the NT discourse databases caught all of us by surprise. Maybe there was a need for more resources like this? The publication of Discourse Grammar of the Greek New Testament in 2008 (digital) and in 2010 (print) confirmed that people were hungry for explanations of features typically attributed to stylistic variation and such. At that point I stopped applying for teaching positions in order to continue building the resources I envisioned using for teaching Greek and Hebrew exegesis. With the help of Josh Westbury and Kris Lyle, the Lexham Discourse Hebrew Bible project was released in 2012. The next five years were devoted to supporting the creation of databases for Logos Bible Software base packages, writing High Definition Commentaries, and creating video products for Logos Mobile Ed and Faithlife Today.

Now eleven years into my experimental, two-year post-doc, I am more convinced than ever that the remaining projects on my to-do list are best pursued in conjunction with classroom teaching, something that full-time employment at Faithlife can’t support. Plans are in place for me to continue close collaboration with them on datasets, but as a part-time contractor, as so many others have done. In the meantime I’ll continue working full-time.

There are also a number of personal factors driving this decision. This year I turned 50, saw our younger daughter graduate from high school and our older daughter continue to flourish in university. Glenda and I are almost empty nesters. I also marked the one-year anniversary of my father’s death and the three-year anniversary of my mother’s, meaning my responsibilities looking after their affairs here in town have ended. I enjoyed the one-month sabbatical that Faithlife grants employees after ten years of service, but long for the longer breaks that are a natural part of campus life. I realized that if I am ever going to join a green-campus faculty and enjoy summer research time and a longer, traditional, academic sabbatical, time is ticking to make the transition. The search for a teaching position that I began two years ago was suspended after discussion with those supervising me. The additional year turned out to be a blessing, allowing me to bury my dad without needing to move him late in life and my daughter to complete her senior year without a move. But after having the same discussion again with my supervisors, all agreed that it made sense to begin seeking out what’s next for the Runges.

I have slowly been getting the word out that I’m searching for a teaching position, but have been reluctant to broadcast the news. However, in light of the tight job market and my being outside traditional academic networks, it seemed prudent to let folks know. My hope is to find a position that would allow me to teach Greek exegesis and NT exposition along with the opportunity to supervise theses, but I’m happy teaching larger survey courses as well. If you know of positions or are interested in starting a dialogue with me about the possibilities, please let me know.

Apr 29 / Steve Runge

Getting ‘By,’ Part 2

In the last post we looked at how prepositions offer a specific representation of an action or state of affairs that might well have been described from some alternative perspective. Each different preposition would view things from a different vantage point, even if only slightly different. We looked at the way by represents the relationship of two objects as though they remain equidistant from one another. This static proximity could be due to both being immobile, to a static distance maintained along a linear object while another moves parallel to it, or to simply zeroing in on a specific moment of the action where the two are proximate.

  1. The pen is by the book (static state).
  2. She walked by the river for an hour (static distance with action).
  3. She walked by the lamppost (zeroing in on a specific moment of proximity).

A quick look at the Concise Oxford English Dictionary entry for by reveals that you don’t find the literal spatial usage until you come to the fourth sense:

1 through the agency or means of (indicating how something happens).
2 indicating a quantity or amount, or the size of a margin (identifying a parameter, expressing multiplication, especially in dimensions).
3 indicating the end of a time period.
4 near to; beside (past and beyond).
5 during.

Assuming that these entries are ordered according to frequency of usage, we see that three metaphorical usages outrank what linguists would consider to be the core, spatial meaning of the preposition. Frequency is sometimes a good indicator of the most basic meaning, but here the metaphorical uses (i.e., ones where you can’t draw a picture of the object) have become entrenched enough to seemingly obscure the core spatial meaning. Now let’s see if we can trace these other senses back to the core meaning. I’m going to skip 5 because I can’t think of any examples, but it seems akin to the “walking by the river” example of parallel activity. If you have good examples, please post them in a comment.

3. Indicating the end of a time period

This usage is a natural extension of the spatial one that we looked at in the last post. First you have the shift from maintaining equidistant proximity (‘sat by the lake,’ ‘walked by the lakeside’) to the snapshot of a motion at the point where there is equidistance (‘walked by the lamppost’). Next, we shift from motion in space to motion in time, metaphorically expressing time as if it really moved. Just as walking is essentially a unidirectional movement along some line, time is often represented as unidirectional motion too. We use timelines to create chronologies of events, with the past and the future treated as directions.

Now it is just a hop, skip, and a jump from here to by representing the end of a time period. Instead of looking ahead and seeing the lamppost that you will walk by, you ‘look’ into the future to some reference point (e.g., noon).

  1. I will complete my essay by noon.
  2. I will complete my essay at noon.
  3. I will complete my essay before noon.
  4. I will complete my essay around noon.

Each of these prepositions metaphorically represents the temporal reference point noon as though it is a geographical point, and the passage of time as though it is motion. Each one relies on the same metaphor, with before (example 3) stopping just short of the target and around (example 4) not providing a specific target.

2. Indicating a quantity or amount, or the size of a margin

This one represents yet another metaphorical step away from the core metaphor of equidistant proximity, but in a different direction from the last usage. The spatial distance of the core sense now metaphorically represents a basis of comparison between two entities. Instead of just distance or length, the margin can be virtually anything that can be quantified.

  1. The home team outscored the visitors by 48 points.
  2. She beat her best time by three minutes.
  3. They extended their vacation by two days.
  4. I learned to count by 10s today!

As you can see, the same core metaphor is at work under the surface; the difference is simply a shift from a literal distance to something else that is quantifiable.

1. Through the agency or means of

Now for the last one, which is actually listed first based on its pervasive usage. It still relies upon equidistant proximity, but the metaphorical focus is upon some consequence or implication brought about by it. By virtue of the proximity, the object of by is represented as the agent or means by which something comes about.

  1. They saved money by not eating at restaurants as much.
  2. She completed the race despite her injury by willpower.
  3. By avoiding fatty foods he was able to lose about ten pounds.
  4. The team was awarded the trophy by the judge.

Although none of these examples relies on literal distance, the proximity of the two entities is presented as bringing about some specific effect or outcome that would not have happened otherwise. Actions and activities are common means (not eating, avoiding fatty foods), whereas entities are more likely to be agents.
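Pulling the threads together, we could tabulate each of the dictionary senses of by as the one core spatial metaphor plus the parameter that has been altered. Here is a toy sketch in Python; the labels and glosses are my own shorthand for the discussion above, not the COED’s wording:

```python
# Toy summary (my own labels, not the dictionary's): each sense of 'by'
# modeled as the core spatial metaphor plus the parameter altered to
# yield the metaphorical extension.
CORE = "static, equidistant proximity between two entities"

BY_SENSES = {
    1: ("agency/means", "proximity reframed as bringing about an effect"),
    2: ("margin/quantity", "spatial distance generalized to anything quantifiable"),
    3: ("temporal endpoint", "motion in space mapped onto motion in time"),
    4: ("spatial 'beside'", "the core meaning itself, unaltered"),
    5: ("during", "parallel extent in time, as in 'walked by the river'"),
}

# Print the senses in dictionary order, showing how each relates to the core
for num, (label, extension) in sorted(BY_SENSES.items()):
    print(f"{num}. {label}: {extension}")
```

The point of the sketch is simply that only one entry (sense 4) is the core itself; the other four are describable as principled departures from it rather than unrelated meanings.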

At Tyndale House this summer a number of us will be looking more closely at Greek prepositions and how best to represent their range of usage. The traditional strategy has been to understand the senses as unrelated to one another. An alternative approach utilizing Prototype Theory is to identify the core or prototypical meaning of something, and then to explain the other uses as metaphorical extensions that relate to the core metaphor in some way. As languages change and usages drop out, it can become difficult to identify these “family resemblances,” but that should not keep us from trying. I am really looking forward to what will be learned from this conference, and very excited that Cambridge University Press has expressed “keen interest” in publishing the proceedings volume.

In my next post I will finally return to the verses in Ephesians that spurred this short blog series.

Apr 22 / Steve Runge

Getting ‘By’

This summer I will be heading to Tyndale House, Cambridge for another Greek linguistics and biblical studies conference, with this one focusing on prepositions instead of the verb. Since the time Will Ross and I decided to convene this conference, I have been thinking a lot more about these pesky little particles, especially about how they are translated in most English bibles. Most anyone who has taken Greek has encountered some variation of the spatial diagram below that offers a basic way of differentiating them from one another. It spatially represents the meaning of each one as a way of distinguishing it from the others. This works really well for the literal, spatial usages.

The presence of an arrowhead on the line implies that the preposition is used to indicate motion, whereas the others are static or stationary representations. Everything seems very straightforward and simple, right? Well, it gets a lot more complicated in a hurry once you move outside this literal usage.

A quick survey of the NT epistles and teaching sections of the gospels reveals that a good many (majority?) of the usages involve non-spatial, metaphorical senses. In other words, the writer is using one of the prepositions above with something you can’t draw a picture of (e.g., love, faith, sin). When we encounter these usages that don’t fit the spatial prototype, we typically grab a lexicon to learn what our options are. This reveals a wide array of options that seem to bear no relation to the core, spatial meaning. What prompted this post was the use of by to represent three separate Greek prepositions in the course of only four verses! Do these Greek prepositions really overlap that much, or is it a mismatch with English usage, or are we missing the metaphor intended by the writer?

I am not deriding the translators here. But in order to address this matter, it seemed prudent to begin by taking a closer look at English by. It offers a great opportunity to illustrate how the core meaning of a preposition can be metaphorically extended to related-but-different meanings by changing various parameters.

Metaphorical Representation

Prepositions offer a metaphorical representation of the relationship between two or more entities. Because real-life situations may be viewed from multiple vantage points or at various stages of a process or activity, writers and speakers face numerous choices about the vantage point or point in time from which to capture their verbal image. In other words, prepositions offer a representation of reality from a specific perspective rather than an objective picture without alternatives. This suggests that there is much to be learned from slowing down a bit and thinking more about the writer’s representational choice. Choosing one vantage point implies that potentially many others were not selected. With this as a little background, let’s now take a look at the representational implications of by.

Spatial Meaning of ‘By’

By is one of the static prepositions akin to Greek παρά, meaning that it represents proximity beside or near some other entity as though it is unchanging/static as in, “The pencil is by the book.” The pencil is not moving toward or away from the book, and it is in a specific physical relation to it.

This is not to say that motion cannot be involved, but simply that the proximity of the entity being described is represented as remaining static with respect to the object of the preposition by. Consider the following two examples.

  1. She walked by the river.
  2. She walked by the lamppost.

Both of these sentences represent a static relationship between two objects, but differences in the nature of the preposition’s object bring about different representations. In example 1 the picture represented is two objects maintaining a static distance from each other based on the woman walking on a path parallel to the river. Theoretically it represents her as not getting any closer or farther away, though in reality we know that rivers meander and that paths are not perfectly parallel to other objects. It is a metaphorical representation selected based on the writer’s interest in a certain image or vantage point.

What about example 2? A lamppost is a point whereas a river is more like a line. How can someone walk parallel to a point? Representation, my dear Watson, representation. If we think objectively about what is involved with walking by a single geographical point like a lamppost, we know that this implies several stages: being at a distance from it, approaching it, being alongside it, moving away from it, and then being at a distance from it again (but ostensibly in a different location). Below are five examples of other stages of this journey that the writer might have singled out using different prepositions.

  1. She walked toward the lamppost (but never reached it).
  2. She walked up to the lamppost (but didn’t touch it).
  3. She walked into the lamppost (while texting).
  4. She walked away from the lamppost.
  5. She walked from the lamppost (to some other point).

The writer could have chosen any one of these vantage points, but selected that slice of the action alongside the lamppost as the most suitable representation for her purpose. Such a representation would be appropriate for describing waypoints along a journey, like dots along a line, where the waypoints provide a verbal description of the path.

She walked by the lamppost on her way to the park to meet some friends, then all of them went to a restaurant for lunch.

Alternatively, something may have happened at the lamppost, so the representation was selected to place her in proximity with it when the other event occurred.

She walked by the lamppost just as a worker was setting up a ladder and preparing to change the light bulb.

Finally, it might be selected to indicate that she only passed near it instead of actually touching or entering it.

She walked by the lamppost that she had run into while she was walking and texting the previous day.

He walked by the candy store, resisting the urge to go in and purchase things that would have ruined his successful diet.

This provides a simple (perhaps overly so) introduction to the way prepositions represent the relation of two entities, and the selection of a specific option instead of other alternatives. In the next post we will explore how changing certain parameters can adapt the literal spatial meaning of something into a “metaphorical extension” to specify other kinds of relations, thus avoiding the need to coin a brand new word or expression. These metaphorical extensions all have their starting point in the core spatial meaning, but end up looking different based on the absence of one or more factors present in the core meaning.

Oct 28 / Steve Runge

Getting above the “Sentence Level”

It is strange to hear people talk about discourse grammar as though it is something altogether removed from discourse analysis based on the misconception that it doesn’t move “above the sentence level.” After all, if discourse analysis is really about discussing higher-level features and structures, how can what I do possibly qualify? This sounds like a reasonable criticism at face value, but it overlooks the reality of how many discourse devices actually operate. The sentiment seems to presuppose that there is a class of discourse devices that only operate above the sentence level that I am apparently ignoring, but how many can you name that only operate above the sentence?

Discourse analysis—at least the cognitively-based models like Walter Kintsch’s on reading comprehension—recognizes that understanding the function of the lower-level features is key to understanding their role at the higher-levels. Why? Because there are precious few “discourse-level only” features. Instead the vast majority play double duty, influencing and shaping our comprehension of the text at multiple levels.

Language has been studied and analyzed for centuries. Philosophers, linguists, logicians, and others have accumulated a rich store of knowledge about language. What has emerged, however, is not a uniform, generally accepted theory but a rich picture full of salient details, brilliant insights, ambiguities, and contradictions. Most of this work has focused on analyzing language as an object, rather than on the process of language comprehension or production. The importance of the actual process of language comprehension has not gone unrecognized, for instance, by literary scholars who have understood very well the role that the process of reception plays in the appreciation of a literary work, yet the tools for explicit modeling of comprehension processes have not been available until quite recently.1

Problem of Linearization

The features of discourse grammar on which I have focused play a key role in shaping our cognitive processing of the larger discourse. To begin with, we don’t read texts in massive chunks; we read them one word at a time. Even if we were to skip ahead or “speed read” we are still only reading one word at a time. Nevertheless, somewhere along the line in our comprehension of texts, we convert the single words into more abstract mental representations of the text. The lower-level features are, in reality, also the author’s instructions for organizing and structuring the higher-level representation of the text as a whole. In communication, the writer/speaker faces a significant constraint, referred to as the problem of linearization.


The Problem of Linearization

Linearization describes the fact that we can only produce one word at a time, one sentence at a time; conversely the reader/hearer can only take in one word at a time, one sentence at a time.2 If the reader does not properly comprehend how the individual words, phrases and clauses relate to one another, miscommunication will inevitably result. Consider the difference that a simple comma makes in the title of Lynne Truss’ bestselling book on English punctuation.3

a) Eats, shoots and leaves.
b) Eats shoots and leaves.

The presence or absence of the comma here is the difference between discharging a weapon after a meal versus a comment about the diet of an herbivore.


The presence of the comma in a) provides instructions intended to overcome the linearization problem. The comma in a) constrains the reader to view “shoots” and “leaves” as verbal actions, whereas its omission in b) constrains the same words to be read as direct objects describing “what is eaten.” You see, lower-level features are the keystone to understanding higher-level structures. Failures at the bottom are magnified the higher up one moves in the analysis.
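Purely as an analogy (this is a toy model, not a real parser, and all the names are mine), the comma’s role as an explicit low-level instruction can be sketched in a few lines of Python:

```python
# Toy model: a single low-level signal (the comma) determines which
# higher-level parse the reader constructs for Truss's famous title.
def parse_roles(sentence: str) -> dict:
    """Assign toy grammatical roles to the words, based solely on
    whether the disambiguating comma is present."""
    if "," in sentence:
        # "Eats, shoots and leaves": the comma forces a list of verbs
        # (the panda fires a gun and exits)
        return {"eats": "verb", "shoots": "verb", "leaves": "verb"}
    # "Eats shoots and leaves": without the comma, 'shoots and leaves'
    # are read as the direct objects of 'eats' (the herbivore's diet)
    return {"eats": "verb", "shoots": "object", "leaves": "object"}

print(parse_roles("Eats, shoots and leaves."))
print(parse_roles("Eats shoots and leaves."))
```

The sketch makes the point concrete: the entire higher-level interpretation flips on one sentence-internal mark, which is exactly why lower-level analysis cannot be skipped.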

Bottom-Up or Top-Down?

The problem of linearization has important ramifications for how one analyzes discourse. There has been an ongoing debate in biblical studies about which approach to DA is superior: Should it move from the top-down, or from the bottom up? Are they actually mutually exclusive?


Kintsch’s research from the past several decades has demonstrated that our reading of written texts is really a combination of both. His findings are based on empirical research into human cognition and language processing of written texts—not an abstract model of what might be happening when we read or listen to a discourse.


Kintsch has found that our reading of texts involves an iterative and almost simultaneous bottom-up and top-down processing, which he refers to respectively as “construction” and “integration.” As we read, we necessarily process language linearly—one chunk at a time. This is the bottom-up phase which Kintsch calls construction. We construct a mental representation of the text itself as we read, as modeled in the linearization illustration above. But this is not all that is happening.

As the bottom-up construction occurs, there is also a top-down process occurring referred to as integration. The newly forming mental representation of a text doesn’t exist in an isolated silo of our brain. Instead Kintsch has demonstrated that we integrate the new one into our existing, larger mental representation. This integration is not simply with the earlier portion of what we’ve read or even other books we’ve read, but with the sum of our knowledge about the world and how it operates based on our prior learning and experiences. This is a simplification, but gives you an idea of the importance of factoring cognitive processing into a model of discourse analysis.
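As a grossly simplified sketch (the names and structure here are my own illustration, not Kintsch’s actual model), the interleaving of the two phases might be pictured like this:

```python
# Toy sketch of the construction-integration cycle: as each word arrives
# (linearization), it is both appended to a representation of the text
# itself (construction) and folded into a larger model that already
# contains the reader's prior knowledge (integration).
def comprehend(words, prior_knowledge):
    textbase = []                     # bottom-up: representation of the text
    situation = set(prior_knowledge)  # top-down: larger model of the world
    for word in words:                # input arrives one word at a time
        textbase.append(word)         # construction phase
        situation.add(word)           # integration phase: new material merges
                                      # with the existing representation
    return textbase, situation

text, model = comprehend(["pandas", "eat", "shoots"], {"pandas", "bamboo"})
```

Because the resulting model mixes the text with what each reader already brought to it, two readers running the same “loop” over the same words still end up with different final representations.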

Kintsch’s description of the integration phase of language processing has great explanatory power, helping us understand how it is possible for two people to read the same text and come up with quite different conclusions about it. Differences in background knowledge, goals, and presuppositions all play a role in how we process a text. We don’t just read a text; we also integrate it with what we already know. This also explains why some of you who read this blog post will cheer it while others reject it or find it boring. Our own mental representation of the world—and presuppositions about DA—play a huge role in how we process new texts or communication.

Analyzing Higher-Levels of Discourse

Now let’s return to the original question about my focus on sentence-level features versus tackling higher-level analysis. The linearization problem gives us a clue about the significance of lower-level features to higher-level shaping of the discourse. The ugly little truth is this: The higher up one goes in the analysis of discourse, the fewer explicit and unambiguous markers there are to guide the analyst. Instead, higher-level analysis is driven by our mental representation of the discourse, which in turn goes back to things like chunking into developments, information structure, forward-pointing devices, the thematic prioritization of information, and our existing mental representation of the world. These lower-level features impact the higher level as they are integrated into our larger mental representation of the discourse. In this sense, talking about levels of discourse is largely a theoretical abstraction, not necessarily modeling how we actually process texts.

Focusing on the lower-level features pays dividends as one moves to the higher levels. Why? Many features that operate at the lower levels also impact the higher levels, as alluded to above. In fact, there are very few linguistic features that exclusively operate at higher levels. Most that come to mind are actually typesetting or orthographic conventions, like paragraphing and titling. These are largely modern conventions.

Lower-level features are able to accomplish higher-level functions as they are found clustered together with other features. For example, inferential conjunctions like οὖν are rightly viewed as having a higher-level function, but they nevertheless still simply conjoin two clauses. Judgements about joining higher-level units are made by considering the co-occurrence with other “boundary features” like those described by Levinsohn in Discourse Features.

So in terms of methodology, Levinsohn and I may begin with sentence-level features, but this is not where things end. The lower-level analysis necessarily serves as a foundation. Think Lombardi; the game is won or lost based on how well we have mastered the fundamentals. If there are problems there, they tend to compound as one moves up in their analysis.

Focusing on lower-level analysis also plays a “quality assurance” role in higher-level analysis of texts. It allows us to double check our work using a top-down reading to ensure we are able to reconcile all lower-level analyses with our higher-level claims. If we can’t, then we need to go back and figure out where we went wrong. Any who have done DA with me at SBTS, DTS, WEST, or Wycliffe Hall-Oxford can attest to this. Just like Bugs Bunny, I can miss that important turn at Albuquerque and need to reconsider lower-level decisions to better account for higher-level phenomena.

In contrast, those focusing solely on “discourse-level” features are not troubled by implications from discourse grammar. The biblical writer’s lower-level choices that contradict the analyst’s higher-level claims can only serve as a corrective if the analyst has attended to them. Otherwise ignorance is bliss, so to speak.

This explains my preoccupation with “sentence-level” features, like connectives, highlighting, and structuring devices. Call me silly, but it would seem that if one has properly understood how a device operates in a simplex context at the lower levels, then one will be in a much better position to adequately describe its much more complex interaction with other features at the higher levels of discourse processing, i.e., the integration stage. Again, I am not really sure we can discretely separate higher-level from lower if the same devices operate at both.

Practical Payoff of Discourse Grammar for DA

There is a time for doing DA, but we should not rush on to higher-level analysis at the expense of attention to lower-level details. Why? If the lower-level exegesis is flawed, think about the implications for the higher-level conclusions drawn. I have opted to major on the lower-level in order to avoid preventable mistakes at higher levels.

Those who complain that discourse grammar only considers sentence-level features must have very different presuppositions about DA and how discourse features actually work. Because I am not postulating about broader themes of a book or “zones of turbulence,” the reasoning goes, I must not be doing DA. But our methodology must account for the reality that most linguistic features contribute at multiple levels of the discourse. In other words, zones of turbulence are best understood as a clustering of discrete, describable, sentence-level discourse features that function in concert to accomplish a higher-level function. Making assertions about higher-level features is one thing; being able to describe how the lower-level features contribute to make the higher-level result is quite another. If there really is a zone of turbulence, then your case will be strengthened by demonstrating the contribution of each component.

There are indeed different levels of analysis, but failures in the lower levels tend to be magnified at the higher-levels. Attention to the lower levels provides a safeguard and corrective to the higher-level analysis.

For those who criticize me for focusing on sentence-level phenomena, the discussion and diagrams above are a preview of a larger project I will be undertaking in 2015 on moving from discourse grammar to discourse analysis. Discourse Grammar of the GNT was intended to serve as a foundation for later work, not as a manual for analyzing discourse. The newly shipped High Definition Commentary: Romans offers a simplified example of my approach to DA, demonstrating its practical payoff for the pastor or teacher.

  1. Walter Kintsch, Comprehension: A Paradigm for Cognition (Cambridge; New York: Cambridge University Press, 1998), 93.
  2. Gillian Brown and George Yule, Discourse Analysis, Cambridge Textbooks in Linguistics (Cambridge; New York: Cambridge University Press, 1983), 125.
  3. Lynne Truss, Eats, Shoots, and Leaves: The Zero Tolerance Approach to Punctuation (New York and London: Penguin, 2003).
Oct 21 / Steve Runge

Continuing education in discourse studies

One of the most common questions I receive concerns what kinds of steps one can take to keep digging into discourse grammar after reading my Discourse Grammar of the GNT and Levinsohn’s Discourse Features, and interacting with the annotated Lexham Discourse Greek New Testament. There are two prongs to my response. First, you don’t necessarily need to pay tuition to learn. When I left my MTS program I had a list the length of my arm of things I wanted to dig into more deeply. Post-graduation is the perfect time to do that, plus it helps you maintain the scholarly discipline you developed while doing your degree, assuming you formally studied somewhere.

The second part is that there are so many great resources on the Web to help you learn and grow that you should be able to read for quite a while before running out of material. The key thing will be developing a reading list that progressively orders the material. For instance, I quickly found I lacked the needed background to read Levinsohn’s Discourse Features; he assumed too much background information which I had not yet learned. So I went back and read Dooley and Levinsohn’s Analyzing Discourse. But I again found that this book assumed too much background, given its terse style of writing. So I decided I needed to read Lambrecht’s Information Structure and Sentence Form, the hardest book I had ever read. I had no background in cognitive linguistics, so I went back and started reading the earlier works of Wallace Chafe that Lambrecht had built upon. I was finally beginning to get somewhere.

How did I find these “prequel” books? By reading the footnotes and bibliography. If someone is doing productive stuff that builds on someone else’s work, then those other works get added to my to-do list. Reading the primary texts on which others have built also ensures that I really understand the original, and not just its application by someone else. This kind of digging is what led to the publication of my article on Porter’s misuse of contrastive substitution. Checking his primary sources of support revealed contradictions between his arguments and those sources, particularly the work of Stephen Wallace.

Another way of gaining direction is asking someone. For me it was Stephen Levinsohn and Randall Buth. The former recommended I read an introduction to linguistic typology. And so on it went, for a really long time. Then, after about three years, I had finally come full circle and had enough background to read Discourse Features. It was then and there that I swore before God that I would write a book that could serve as an easier prequel path than the one I had to take. My colleague Josh Westbury has also developed a recommended reading list for building a general background in linguistics. It is not meant to be read in order, but is simply a compilation.

As for continuing your studies, remember that you can keep learning without necessarily paying tuition. When I finished my MTS I had all sorts of questions flagged for later study, ones I did not have time to pursue while writing my thesis. I began digging into these, and also began working through the required reading lists of the top Semitics programs I dreamt of enrolling in. It all kept me learning and prepared me for doctoral studies. The four years of reading allowed me to catch up financially and get a jump on the doctoral-level reading I’d need to do. When I had questions, I’d write to scholars I had met to pick their brains. Most were more than happy either to answer my question or to direct me to the reading that would fill the hole in my theoretical framework that my question revealed.

There is no precise path forward; the answer to “How do I proceed?” really depends on the kinds of things you want to learn or be able to do. If you find something interesting, start digging into its bibliography and citations and read those sources. Keep going until you find yourself circling back to the same cited materials. Then move on to another area. It is hard work, but the great thing about learning is that it can be a self-sustaining endeavor, especially when pursued as part of a larger community. My community initially consisted of my springer spaniel and interactions with scholars at SBL. The Web, Twitter, Facebook, Boxer and video conferencing have completely changed the playing field. Hopefully you can find an easier path forward now than when I began in 1999.

Oct 13 / Steve Runge

On eclecticism in linguistics


I received several comments and inquiries over the last few months regarding methodology. These questions stem from claims and criticisms that have been rattling around NT studies for several decades concerning the legitimacy of eclectic approaches compared to the pure application of a system-based approach like Systemic Functional Linguistics (SFL). The general view I’ve heard espoused comes from SFL practitioners: eclectic approaches are not just dispreferred, but border on illegitimate. While this may be true for those seeking to build a complete description of language within a single system, most applied linguists in linguistics proper are not afraid to adopt proven principles from another, theoretically compatible approach. They utilize a variety of approaches in order to best tackle the problem at hand.

I’ll begin with a quote from the introduction of Dooley and Levinsohn (2001:iii):

First, we intend it to be practical, addressing issues commonly confronted by field linguists. Rather than attempting to apply a rigid theory or survey a variety of approaches, we provide a methodology that has been refined over years of use. Second, although we follow no rigid theory, we aim for more than a “grab-bag” of diverse methodologies by attempting to present the material within a coherent and productive framework. Specifically, we follow a functional and cognitive approach that seems to be a good approximation of how discourse is actually produced and understood.

The key phrase there is “a good approximation of how discourse is actually produced and understood.” This is the focus of applied linguists, not theorizing about how language as a whole operates.

Where do linguistic theories come from?

This complaint about eclecticism disregards the historical development of linguistic theory, including SFL. No linguistic theory has developed ex nihilo, nor does any continue to develop in hermetic isolation. Rather, we most often find an evolutionary process at work, with each method building on those that went before. Halliday, Lyons and the early functionalists were reacting against the formalist movements of the day. Specifically, Halliday concentrated on tracing the development path, mapping all the different choices made in the production of a discourse as a functional system. His basic assumptions about choice and meaning, and about cohesion and coherence, were sound, but not all of his ideas panned out. Halliday began his theory by incorporating insights from the Prague School’s Functional Sentence Perspective (FSP), such as the notions of Theme/Rheme and given/new. He also worked with Hasan to make significant claims about coherence and cohesion. But not everything went right.

Halliday’s reformulation of the Prague School notions of theme and rheme into his systemic theory led to problems. His claim that “what comes first” is theme doesn’t work outside of English, in languages like Greek, Hebrew, or any other highly inflected language. Similarly, his assignment of given-new to the “tone group” works okay for English, but breaks down quickly in languages that use some means other than intonation to mark such things. For instance, Japanese uses particles, not just intonation, to differentiate P1 from P2. So while there is much to praise about Halliday’s overall assumptions and insights into coherence and cohesion, there was a need to better account for what he mucked up in information structure. This is partly why there has been a split (or coup) within the SFL ranks, which I note in section 9.4 of my Discourse Grammar.

Other responses to the Prague School

There were other theorists besides Halliday working the problems left by the Prague School, such as Paul Grice, who applied logic to pragmatic choice in contrast to Halliday’s socio-cultural focus. Grice developed a very concise summary of pragmatic “rules of engagement” that offered a better account of how such choices are made in language than Halliday could offer, summarized in his maxims of conversational implicature. Grice’s maxims were then further refined by Stephen C. Levinson to three: be brief, be relevant, be perspicuous. Neo-Gricean pragmatics could account for the kinds of socio-cultural influence that Halliday had on his to-do list but offered no means of accomplishing. The pragmatics folks were not concerned with Halliday’s broader questions, but instead with solving the one piece about which they were most passionate. They carried on their specialized work on pragmatics, but those who followed took the insights from Gricean pragmatics, added them to Halliday’s insights about the systemic functional nature of language (choice implying meaning, etc.) along with Halliday and Hasan’s insights about cohesion and coherence, and carried on their merry way looking into other nagging issues. Recall that all of this was built on the correct insights from the Prague School.

Then along came Sperber and Wilson with the insight that Grice’s maxims could be distilled not just to three principles but to one: the principle of relevance. Relevance Theory (RT) sought to re-envision the entire language production and comprehension process on the basis of this one principle. In doing so, they demonstrated that the brain is indeed involved in language production, as Chomsky had theorized. But instead of some universal grammar hard-wired into our beings, they found that grammar and language are a natural outworking of our cognitive processing. Like Halliday, RT was mostly preoccupied with English, so others needed to adapt and redirect its basic insights in order to account for the typological data and patterns found in other languages.

Some focused on the cognitive aspects of language, including Wallace Chafe, Walter Kintsch, Ronald Langacker, and George Lakoff. Their work predates RT, but all were seeking to account for the same kinds of phenomena. Chafe and Kintsch wanted to understand what we actually do with discourse as we process it. Chafe found that we don’t store the words we read or heard, at least not after a bit of time has passed. Instead these words are somehow transformed into mental pictures or “representations” of what was processed. Kintsch, in his research into reading comprehension—what we really should be reading to develop a DA methodology for written texts in biblical studies!—found that we iterate between a construction of Chafe’s mental representation and its integration with what we already had in there, based on our knowledge of the world, previous experience, etc. Lakoff found that words don’t have meanings so much as meanings have words. When I say the word “dog,” what pops into your mind is likely slightly different from what pops into mine, yet there is an agreed-upon, prototypical range of meaning for “dog.” When this range is stretched, we mark it by saying “dog-like” or something else. They found that most meanings have fuzzy edges, not the black-and-white boundaries envisioned by Aristotle. This has led to tremendous development in the area of cognitive linguistics and human comprehension. Nevertheless, it still builds on the bits that Halliday and others got right, while seeking to fix what they either got wrong or had no interest in accounting for.

Typologically-informed approaches

Meanwhile, back at the ranch, there were others who, like Halliday, sought to apply the Prague School insights, but outside of English. Simon Dik began looking at minority languages to determine whether theme and rheme were indeed universal across languages. He found they largely were, but Halliday’s English-based account was too idiosyncratic to scale out to non-Western languages. This led to the rise of (Functional) Discourse Grammar (FDG), a typological updating of the Prague School. It was Dik who theorized that there were two functional reasons for marked word order, represented in the preverbal slots P2 and P1 (what I have called emphasis and frames of reference, respectively). FDG also worked through other issues overlapping with Halliday, but somewhat in isolation from what was going on elsewhere in cognitive approaches.

Chafe too was interested in information structure, specifically its intersection with cognitive processing. This led to the development of Construction Grammar, which describes set linguistic patterns and functions as “constructions.” Chafe’s greatest disciple, Knud Lambrecht, pulled together the pieces from the Prague School, cognitive linguistics and mental representations, Gricean pragmatics, and the given-new distinction into a coherent account of information structure, one that satisfyingly covers everything from the shaping and production of utterances through to their cognitive processing and storage by the hearer/reader. Lambrecht’s model has subsequently been adopted almost wholesale in RT, Role and Reference Grammar, and elsewhere. Kintsch’s work in reading comprehension offered independent corroboration of many of Lambrecht’s ideas.

Objectives shape the theory

The differences we find among linguistic methodologies do not stem from one being right and another wrong; they stem from the differing questions each theory is attempting to answer. SFL focuses, among other things, on the socio-cultural factors that shape language use, but not on cognitive processing. When it needs to develop this other area, SFL can either reinvent the wheel or borrow from another theory that has already done the hard work of development. We see this happening if we track the other functionalists who came along shortly after Halliday.

Folks like Foley and Van Valin asked a brand new question, partly in response to Halliday’s English-centric problems: what would a typology of language look like if it had not begun with a Western European language like English or German? Role and Reference Grammar (RRG) thus works toward a unified framework that can account for the kinds of things Halliday first postulated, but without the predisposition to English. RT seeks to do the same thing, but in terms of the basic presupposition of relevance. Construction Grammar (ConstG) seeks to do the same thing. How? By taking the pieces that work from the other approaches and filling in the gaps or errors in light of each approach’s specific goals and objectives. They all begin with the Prague School insights that needed improvement, then build on Halliday’s understanding of cohesion and coherence, then on the work of Chafe and Lambrecht on information structure and mental representations, which in turn builds on Gricean pragmatics and the basic assumption of relevance from RT. All have common roots and common presuppositions, but unique objectives. To the extent that their basic functional presuppositions agree, each can utilize the work of the others to improve the whole while accomplishing its unique objectives.

Consequences of purist theoretical approaches

So what is the problem with eschewing eclecticism to maintain a pure theoretical framework like SFL? Nothing, really. It is a valid choice, but it is not without drawbacks. To begin with, Halliday’s school of SFL has done little to incorporate the critical insights from cognitive linguistics regarding mental representations, relevance and information structure. Some have maintained their purity to their own detriment as they refuse to incorporate new insights from other approaches. This isolation has also led to a revolt of sorts: the Cardiff school of SFL has broken with Halliday in Sydney on certain issues, based on the latter’s resistance to updating ideas in light of these revolutionary insights.

The purity of SFL within NT studies

Let us return more specifically to Porter’s application of SFL to NT studies. His claim of maintaining theoretical purity stands at odds with his incorporation of concepts from outside SFL. I have found no evidence of contrastive substitution being adopted and reworked within SFL (critiqued here), nor of Porter’s symmetrical approach to markedness theory (critique forthcoming). Both have been borrowed, which sounds like the kind of eclecticism about which he complains. I applaud him for using things that work; every applied linguist worth his salt does the same. My problem lies in Porter’s claiming to operate purely within SFL when in fact foundational pieces of his theoretical framework are borrowed. The same could be said of his incorporation of frequency analysis; I have found no such approach evidenced within Halliday’s SFL. One should not play both sides of the fence.

Advantages of eclecticism

This has been a lousy attempt at narrating the history of functional linguistics, but my point is simple. If you trace any modern theory of language (SFL, RT, RRG, FDG, CogG, ConstG) back to its origins, you will inevitably find evolutionary eclecticism along the way. Everyone has built on the bits that others have gotten right. The exceptions are the purists, whose approaches necessarily demand that they reinvent the wheel in order to assimilate X into their worldview of language. Most everyone else is eclectic to one extent or another.

Adopting a purist approach means not incorporating proven insights from another theory until they have been assimilated within one’s own. Levinsohn and I are too practically motivated to do this; we are applied linguists, not theoretical ones. If you look at the table of contents of Dooley and Levinsohn’s Analyzing Discourse, you will find this illustrated. They begin with Leech’s basic insight that the number of speakers involved in the production of a discourse affects the kind of discourse that results. This is followed by Longacre’s insights from his Grammar of Discourse into how agent orientation and sequentiality meaningfully explain how different genres come about, and why features in one genre might operate differently in another. They then move to Halliday’s insights from SFL into idiolect, style and register, adding another layer of complexity to our understanding of language. Then they shift from these complexities to the specific factors that hold a discourse together. D&L’s discussion of coherence and cohesion begins with Halliday & Hasan (H&H) because they nailed the basic concepts within SFL, but they didn’t provide the practical tools for working out these ideas at the sentence level. This is where D&L shift to other approaches that do address these practical questions, and where insights about cognitive processing from Chafe (CogG) on chunking—and Lambrecht (ConstG) on information structure—enter the picture, along with Levinsohn’s MA work on participant reference in Inga.

Levinsohn and I are indeed thoroughly eclectic, but the important point to recognize is that most everyone else doing applied work is as well, based on the evolutionary development of linguistic theory. We still would have been eclectic even if we were only using Construction Grammar or RT. Each has built upon the other.

Disadvantages of purist theoretical approaches

Being a true methodological purist has its drawbacks for the end user. It would have required that I not use proven insights from another approach or field such as cognitive poetics, which is rethinking poetics and literary analysis in light of insights into cognitive processing of language. I would only be able to use such insights after they had been assimilated into my theory.

More importantly for my task at hand, being a non-eclectic purist would have necessitated that my Discourse Grammar readers learn my method’s jargon and idiosyncrasies rather than the accessible description that eclecticism made possible in the published version. If you wonder why it is easier to understand than some other works in our guild, my eclecticism is partly to blame. The purist approach typically complicates things much more than eclecticism does, owing to the methodological constraints of the theory. Theoretical approaches are generally less applicable than applied ones, since the theory constrains the approach rather than the data.

Goals drive our method

The real question to ask is this: What exactly is our goal for linguistic analysis in biblical studies? Is it to develop a unified theory of language, or to functionally describe the features and their contribution to the discourse? Both are legitimate, but I fear the two have been conflated as though the latter necessarily presupposes the former. There is nothing wrong with Porter developing a theory of language to account for the NT corpus, but there are plenty of existing theories besides SFL that may readily and legitimately be applied.

A purist application of Halliday would not necessarily lead to understanding the structure and flow of Romans. Instead it would mean developing a theory of language that could account for the linguistic artifact we find in the NT book of Romans. Brown and Yule would also likely have a different objective for DA than most NT scholars would desire, since their focus is directed to spoken discourse rather than written. In my view, Walter Kintsch’s work on reading comprehension is really where we should be focusing: it is thoroughly up to date in terms of cognitive processing, and it is specifically focused on the comprehension of written discourse. Spoiler alert: this is the direction I will be heading in 2015 in my volume on moving from discourse grammar to discourse analysis.

One final point: if I had wanted to be a purist, I could have reformulated almost all of my descriptions of discourse features under the auspices of RT, ConstG, CogG, RRG, or FDG, so long as I adapted to their idiosyncrasies. All are in basic agreement on fundamental presuppositions, which is why I could operate within any one of them. However, their differing objectives, like RRG imagining a non-Western typology, necessarily lead to differences in the approaches. So eclecticism is not as reckless or strange as Porter has portrayed it, nor is methodological purity the only legitimate linguistic option. The reality is that there is no perfect framework; each is focused on a different part of the same overall puzzle.



I trust that Dr. Porter is well aware of the principles and history described above. I believe he has framed the eclectic-versus-pure-theory issue this way for rhetorical advantage. Since 1989, he has been writing for an audience within our guild that is largely ignorant of the broader field of linguistics. People have taken his claims at face value, assuming they accurately reflect the broader discipline outside NT studies. Those of us who have read more widely in linguistics proper recognize the advantage he has gained in framing things this way. The point of this post is to offer an alternative view on the matter.

If Porter wants to invest his energy in theorizing about the language of the GNT, there is nothing wrong with this. My calling is to provide something that has a more practical payoff for the guild. Denigrating eclectic linguistic approaches as illegitimate demonstrates an ignorance of how modern linguistic theory, especially functional theory, has developed. Keep that in mind as you hear this argument put forward in San Diego next month.


Apr 15 / Steve Runge

On light and darkness

John 1:4–5: “In him was life, and the life was the light of humanity. And the light shines in the darkness, and the darkness did not overcome it.”

This is a very familiar text, one which any Christian would readily affirm. This is indeed what Jesus has accomplished through His death and resurrection from the dead. Nevertheless, we see mature believers lose sight of this truth. Why? How? Well, there seems to be a competing reality, one which is false yet still carries significant weight at times. This alternate reality might simply be termed “darkness.”

What do I mean by darkness? It can be circumstances that appear insurmountable, as though there is simply no hope of any meaningful change or respite. It might take the form of unbearable disappointment, having hopes and dreams crushed into oblivion with seemingly no possibility that anything good could come of it. It can stem from the shame and regret associated with sin, when the desire to change is countered by the humiliation and pain that restitution and reconciliation seem to require. In each case, one is led to believe that living in the darkness is the only viable option.

Jesus’ incarnation shined a light in the darkness, but it did not make the darkness go away. He has overcome the darkness, but this doesn’t mean it no longer has any power. Jesus has set us free from the power of sin and darkness, but we still face the challenge of turning away from both in order to follow Him. Sin and darkness have only the power we give them as we choose where to set our minds. Here’s how Paul phrases it in Romans 8:5–13:
For those who are living according to the flesh are intent on the things of the flesh, but those who are living according to the Spirit are intent on the things of the Spirit. For the mindset of the flesh is death, but the mindset of the Spirit is life and peace, because the mindset of the flesh is enmity toward God, for it is not subjected to the law of God, for it is not able to do so, and those who are in the flesh are not able to please God. But you are not in the flesh but in the Spirit, if indeed the Spirit of God lives in you. But if anyone does not have the Spirit of Christ, this person does not belong to him. 10 But if Christ is in you, the body is dead because of sin, but the Spirit is life because of righteousness. 11 And if the Spirit of the one who raised Jesus from the dead lives in you, the one who raised Christ Jesus from the dead will also make alive your mortal bodies through his Spirit who lives in you. 12 So then, brothers, we are obligated not to the flesh, to live according to the flesh. 13 For if you live according to the flesh, you are going to die, but if by the Spirit you put to death the deeds of the body, you will live. (LEB)
Where we choose to set our mind is a matter of life and death. The mind set on the flesh has only one outcome: death. I do not believe Paul here pictures only indulgent, lust-filled living. Rather, Paul repeatedly talks about the need to have our minds renewed, the need to fix our focus on things above instead of things around us (Rom 12:2; Gal 5:16–18; Phil 4:4–9). As we choose not to rejoice, not to meditate on what is true and honorable and pure, to set our mind on the flesh rather than the Spirit, figuratively speaking we are turning our back on the light and returning to darkness.
Jesus is indeed the light of the world, and we will celebrate this on Easter Sunday. But we cannot forget the reality of the darkness that remains. Jesus has set us free from the power of sin and darkness, but we still have the option of giving both a power they should no longer have. Darkness is still darkness, and it still leads to death. If our mind is set on our perspective of our circumstances, on our disappointment, on our inability to see a possible way forward, we are exchanging light for darkness.
On Saturday afternoon I received word that a friend, a pastor with whom I’d served through some rough patches in ministry, had taken his own life. I do not know what he was thinking or why he did it, nor will I likely ever know. I do not believe there was some great sin lurking in his closet. But I can’t help thinking that somewhere along the way he allowed darkness to take the place of light in small ways. His decision to end his life may have come quickly, but the underlying causes that led him to it were most likely a slow progression. I know this is a bit out of context, but I don’t think Jesus would mind: “Therefore if the light in you is darkness, how great is the darkness!” (Matt 6:23). Unthinkable decisions begin to look like viable options in the light of darkness, especially where the light really is darkness.
I wish what happened last week could be called an anomaly, but I have seen it repeated. In fact the guy who led me to Christ in 1985, who went on to serve as a pastor, also ended his life some years back. While some around me reacted angrily at these decisions, I found it hard not to be empathetic. Twice I have had medical crises disable me from working, where the combination of medical bills and no income brought on a darkness so thick it seemed impenetrable. I have had hopes crushed near the conclusion of a long path of hard work that made it all seem in vain. I have been ashamed by the consequences of sin, left wondering if there is sufficient grace and love to possibly rebuild what my choices had destroyed. All of these were the darkest of moments in my life. As I isolated myself, as I sought to find a way forward by my own understanding, I found the darkness overwhelming me. Why? Because the decisions to isolate and depend on myself in reality were decisions to turn away from the Light of life.
When we allow circumstances and situations to rule our lives and shape our perceptions of things, we are not walking in the light. “If the light in you is darkness, how great is the darkness?” Very great indeed. If we stand alone, we will fall alone. 1 John 1:5–7 offers a better way forward:
“And this is the message which we have heard from him and announce to you, that God is light and there is no darkness in him at all. If we say that we have fellowship with him and walk in the darkness, we lie and do not practice the truth. But if we walk in the light as he is in the light, we have fellowship with one another, and the blood of Jesus his Son cleanses us from all sin.”
The darkness has already wrought enough havoc. If there are areas of darkness you have made peace with instead of turning away from them, it’s time to put an end to it. If circumstances are too hopeless to bear, then stop trying to bear them alone. I cannot offer you promises of quick fixes without consequences. All I can do is give testimony of God’s faithful shepherding of me out of these dark patches and back into the light. Yes, they were patches; they did not stretch to infinity and beyond, despite feelings to the contrary at the time. This was accomplished in large part through the ministry of other believers in my life, not alone.
Let someone in. Ask for help. Do not give the darkness power it no longer should have.
Jan 30 / Steve Runge

Summer internships in Greek Discourse Grammar

I have gained approval to try something that I have wanted to do for years: offering summer internships at Logos. Many have asked about opportunities to study with me, and this is as good as it will get. I’ll mentor interns in their synthesis of discourse features into a unified reading of a NT book or portion of a book. The end result of the research would be the publication of a discourse handbook.

I have had the privilege of doing some intensive teaching as a visiting professor, as well as mentoring a few poor souls at a distance. Although this has served its purpose, there is nothing like intensive collaboration for gaining applied knowledge. There is also nothing like comprehensive application to shake your theoretical framework to the core and identify areas that need attention. A summer internship is the best means I could think of to make this possible.

If you want to learn how to analyze a book, how to shape ideas into a research proposal, how to weigh the impact of one feature against another, there will be no better opportunity than bringing all your knowledge to bear in this summer internship. The research from the summer should lay much of the groundwork needed for outlining a doctoral proposal. It would also form the basis for ongoing mentoring at a distance through your dissertation.

While this internship offers the opportunity to develop your research skills and theoretical framework, it is primarily about writing up what you have found. In fact, your ability to clearly and succinctly describe the features of the text is of the utmost importance. The internship is a writing gig; research skills and other benefits are simply a natural consequence of the analysis and writing. This means you need more writing experience than exegetical papers for school, or even technical articles.

If you have read my work, you know I am passionate about making things accessible to non-specialists. As a discourse intern the same would be expected of you, and you’ll learn new ways to do it. This means blogging about grammar or language is likely the best preparation, besides having mastered the discourse grammar material. Below is the text that will appear in the ad at Logos, though I’m not exactly sure when:


Greek Discourse Grammar Internships

Logos Bible Software is seeking highly qualified candidates for Greek Discourse Grammar Internships this summer.  Successful candidates will have mastered the concepts described in Discourse Grammar of the Greek New Testament, and will assist in the development of exegetical handbooks which help pastors and students better understand the exegetical implications of discourse features annotated in the Lexham Discourse Greek New Testament.  Interns will work directly with Dr. Steven E. Runge as part of the Logos Discourse Team, providing an unparalleled opportunity to develop the skills and theoretical framework needed for advanced research in the field of NT discourse analysis and discourse grammar.


  • Describe how the various discourse features annotated in the Lexham Discourse Greek New Testament contribute to the overall flow of a NT writer’s message using prose accessible to non-specialists. Your specific project will be determined in consultation with Dr. Runge. Writing skills are as important as knowledge of discourse grammar.

The ideal candidate

  • Has mastered the concepts described in Discourse Grammar of the Greek New Testament.
  • Two years of Greek; completed MA/MDiv (or equivalent) preferred.
  • Ability to synthesize the exegetical implications of a writer’s choice to use various discourse features, and to describe their contribution to the overall flow of the discourse.
  • Ability to succinctly and accessibly describe technical linguistic features for readers with a traditional background in Greek.
  • Ability to work as part of a collaborative team.
  • Summer relocation to Bellingham (non-negotiable).

Application Process

Please submit a CV, a letter of interest describing your background in discourse grammar, and a writing sample (or hyperlink to specific blog posts). Applications are due by March 15, 2014. For more information about Logos, see the Logos website.


Why bother coming all the way to beautiful Bellingham during the very best season of the year? Here are a few reasons:

  1. Learning while being paid a modest wage (with a modest relocation allowance) instead of paying tuition.
  2. Opportunity to practically and intensively apply a tested theoretical framework.
  3. The chance to formulate a research proposal for a future dissertation project, and to develop the working relationship needed for ongoing mentoring in your research.
  4. A publication credit.


Aug 24 / Steve Runge

Whence a tenseless Greek Indicative?

Within NT studies, the notion that the Greek verb lacks tense (temporal reference) has become fairly widely accepted. But if you compare this tenseless view of Greek with what has been claimed by every linguist and grammarian Porter cites in his research, you might scratch your head a bit. Why? Not one of them argues that Greek lacks tense. Linguists like Lyons, Comrie, Wallace and Haspelmath treat Greek as a mixed system, with tense, aspect and mood all present in the indicative.

If it is true that the broader field of linguistics has treated Greek as having both tense and aspect, where did the “tenseless” idea come from? Why did it come about? I do not really know the whole story, but it seems that Porter was seeking to account for the incorrect claims of commentators–not grammarians1–some of whom treated Greek verbs as though they had absolute temporal reference. It also seems that he was seeking to set his work apart in what was becoming a crowded field by claiming something no one had claimed before, viz. that the Greek indicative lacks any temporal semantics.

But again, why was there a need to claim a total lack of temporal reference? In my last post I highlighted the areas of significant consensus between Porter and myself. However, Porter added two significant claims to his dissertation, claims not found either in biblical studies or in linguistics: a tenseless view of the verb and a semantic weighting/prominence view of the verb. Both of these proposals lack support or motivation from the field of linguistics. They are essentially rhetorical inventions originating from Porter’s dissertation.

The basic premise of the tenseless view is not just to argue against the presence of absolute time/tense in the verb in favor of aspect; rather, it completely rejects the notion that the Greek verb conveys any temporal semantics in the indicative. The most compelling data for this view is the varied temporal use of the Present, attested by the statistics gathered in Decker’s Temporal Deixis of the Greek Verb in the Gospel of Mark with Reference to Verbal Aspect. Note the almost equal distribution of the present tense-form in past, present and future temporal contexts, not to mention the timeless/atemporal uses, cited from my HP article (p. 215):

Table 1: “The Verbal Aspect of the Historical Present Indicative in Narrative,” p. 215

As you can see, the Present tense-form shows the distribution most damaging to the traditional understanding that it refers to present time.

If you have read much of my work, you will have heard me harp on the importance of one’s theoretical framework. This lesson was thankfully beaten into my head by Larry Perkins, Stephen Levinsohn and Christo Van der Merwe; it has saved me from ruin on a number of occasions, and held the key to unlocking sticky problems. The strange distribution of the Present indicative is one of them.

There are two widely accepted principles that were ignored by Porter and those who have adopted his model. The first is that nearly every Indo-European language–including English, Greek and German–has what linguists call a past/non-past distinction, rather than the past/present/future distinction presupposed by Porter. This means that the Present tense-form in these languages does not refer exclusively to the present, but more broadly to the non-past.

For example, I could say “I am eating dinner with Bob [Monday]” and have either a present or a future meaning, depending on the presence or absence of the adverb “Monday.” So too with Greek. This means that the “futuristic presents” are not anomalous; they are behaving exactly as a good Indo-European language would be expected to behave. The failure to incorporate a past/non-past principle into his framework led Porter, and those who have followed him, to misconstrue the data.

The second principle missing from his framework is treating the historical present as a pragmatic usage rather than as prototypical. The numbers above treat the past use of the Present as though it were part of its basic semantic meaning, rather than a pragmatic highlighting device based on the mismatch of tense and aspect with the narrative context. You’ll need to read the paper for the full argument.

Recognizing the past/non-past distinction, and treating the historical present as a pragmatic device–just as both traditional grammarians and linguists have done for decades–changes what originally seemed like a mess into something quite a bit tidier. Here is the updated table from my article.

  • The HP usage is excluded, based on it not representing the basic semantics of the Present.
  • The future reference is part of the core semantics, based on the non-past reference.
  • The temporally undefined data tell us nothing about temporal reference; they say only that the form was chosen for the aspect it conveyed in a timeless/atemporal context. They should thus either be included in the core usage or excluded as telling us nothing about temporal reference.
Table 2: “The Verbal Aspect of the Historical Present Indicative in Narrative,” p. 216

In either case, the Present indicative forms in Mark show 99% consistency with what would be expected of an Indo-European tense-form.

So had the widely-accepted linguistic notions of Greek having a past/non-past temporal distinction and of the historical present being a pragmatic usage been incorporated into Porter’s theoretical framework, there would have been no basis for making a tenseless/timeless argument in Greek. There likely would not have been much of a Porter/Fanning debate, and our field would not have squandered the last 20 years arguing about a linguistically unsound proposal.

One’s presuppositions play a huge role in determining outcomes. Beware.

  1. See Mike Aubrey’s comments here and here. The older grammarians were not as wrong as Porter has made them out to be. The key is not to judge them anachronistically.
Aug 17 / Steve Runge

Aspect: Areas of Agreement

A great point was raised on Facebook yesterday that deserves a bit more consideration. The comment came from a NT scholar looking in as an outsider to the debate about the Greek verb. There seems to be this notion that if one rejects the idea of a timeless verb in Greek, then one must also reject the idea that it conveys verbal aspect. To do so would doom one to become re-enslaved to viewing the verb as conveying absolute tense and Aktionsart, the very things Frank Stagg fought against. DOOM, DOOM!

Well folks, I am here to tell you that this is not really the case. Regardless of the hype, there is actually quite a bit more consensus on these issues than you might think. If you like Porter’s taxonomy then we have something in common; I like it too. Here is what I mean:

  1. Greek tense-forms convey perfective, imperfective, or a third kind of tense/aspect.
  2. The aspects are present in every mood, whereas tense (“spatial proximity/remoteness” for you timeless folks) is only found in the indicative mood.
  3. The aorist conveys perfective aspect, the present and imperfect convey imperfective aspect, and the perfect and pluperfect convey a third thing. Porter calls it stative aspect, which I can live with.1

On these issues I have sided with Porter’s taxonomy, both on the web and in print. Other than the quibbling over what to call the perfect, there is a high degree of consensus regarding perfective and imperfective aspect, and how the Greek tense-forms align with them.

So how did people get the impression that rejecting some portion of Porter’s framework means rejecting it all? Well, he has framed it as an all-or-nothing proposition, casting things as though standing in opposition to his ideas is to argue in favor of a “once for all time” aorist and so on. This is a rhetorical scare tactic, but it has proven surprisingly effective; it leaves the impression that there is no other viable option available.

There is widespread consensus on these issues, save what to do with the perfect.2 Had Porter stopped here, there quite likely never would have been a Porter-Fanning Debate. Rather, it would have been something more like a Porter-Fanning Report on Aspect, and the field would have quietly continued working out the remaining issues over the next twenty years. However, this was not the case.

Instead of the field being able to move forward with a basic consensus about the Greek verb conveying a combination of tense and aspect in the indicative, and aspect only in the non-indicative, we have had twenty-plus years of arguments based on two proposals put forward by Porter: a tenseless view of the verb and a semantic weighting/prominence view of the verb. These will be covered in future posts.

  1. Campbell considers the perfect another kind of imperfective aspect, which is half right. Fanning calls it a combination of things, which is understandable. No worries, the linguistic field itself does not yet have a consensus about the perfect, but it is getting closer.
  2. And I think that we will find consensus in November that Campbell’s “imperfective Perfect” is wrong.