Oct 28 / Steve Runge

Getting above the “Sentence Level”

It is strange to hear people talk about discourse grammar as though it were something altogether removed from discourse analysis, based on the misconception that it doesn’t move “above the sentence level.” After all, if discourse analysis is really about discussing higher-level features and structures, how can what I do possibly qualify? This sounds like a reasonable criticism at face value, but it overlooks the reality of how many discourse devices actually operate. The sentiment presupposes that there is a class of discourse devices operating only above the sentence level that I am apparently ignoring. But how many devices can you name that operate only above the sentence?

Discourse analysis—at least the cognitively-based models like Walter Kintsch’s on reading comprehension—recognizes that understanding the function of the lower-level features is key to understanding their role at the higher levels. Why? Because there are precious few “discourse-level only” features. Instead, the vast majority do double duty, influencing and shaping our comprehension of the text at multiple levels.

Language has been studied and analyzed for centuries. Philosophers, linguists, logicians, and others have accumulated a rich store of knowledge about language. What has emerged, however, is not a uniform, generally accepted theory but a rich picture full of salient details, brilliant insights, ambiguities, and contradictions. Most of this work has focused on analyzing language as an object, rather than on the process of language comprehension or production. The importance of the actual process of language comprehension has not gone unrecognized, for instance, by literary scholars who have understood very well the role that the process of reception plays in the appreciation of a literary work, yet the tools for explicit modeling of comprehension processes have not been available until quite recently.1

Problem of Linearization

The features of discourse grammar on which I have focused play a key role in shaping our cognitive processing of the larger discourse. To begin with, we don’t read texts in massive chunks; we read them one word at a time. Even if we skip ahead or “speed read,” we are still only reading one word at a time. Nevertheless, somewhere along the line in our comprehension of texts, we convert the single words into more abstract mental representations of the text. The lower-level features are, in reality, also the author’s instructions for organizing and structuring the higher-level representation of the text as a whole. In communication, the writer/speaker faces a significant constraint, referred to as the problem of linearization.

[Figure: The Problem of Linearization]

Linearization describes the fact that we can only produce one word at a time, one sentence at a time; conversely, the reader/hearer can only take in one word at a time, one sentence at a time.2 If the reader does not properly comprehend how the individual words, phrases, and clauses relate to one another, miscommunication will inevitably result. Consider the difference that a simple comma makes in the title of Lynne Truss’s bestselling book on English punctuation.3

a) Eats, shoots and leaves.
b) Eats shoots and leaves.

The presence or absence of the comma here is the difference between discharging a weapon after a meal and a comment about the diet of an herbivore.


The presence of the comma in a) provides instructions intended to overcome the linearization problem. The comma in a) constrains the reader to view “shoots” and “leaves” as verbal actions, whereas its omission in b) constrains the same words to be read as direct objects describing what is eaten. You see, lower-level features are the keystone to understanding higher-level structures. Failures at the bottom are magnified the higher up one moves in the analysis.
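To make the idea of “processing instructions” concrete, here is a deliberately toy sketch in Python (my own illustration, not a serious parser): the presence or absence of the comma is the only thing that changes, yet it flips how the same linear string of words gets structured.

    # Toy sketch: the comma serves as a processing instruction that changes
    # how the reader structures the same linear string of words.

    def parse(title):
        if "," in title:
            # "Eats, shoots and leaves": three coordinated verbal actions.
            verbs = [w.strip(" .") for w in title.replace(" and ", ", ").split(",")]
            return {"verbs": verbs, "objects": []}
        # "Eats shoots and leaves": one verb taking coordinated direct objects.
        words = title.strip(".").split()
        return {"verbs": [words[0]], "objects": [w for w in words[1:] if w != "and"]}

    print(parse("Eats, shoots and leaves."))
    # {'verbs': ['Eats', 'shoots', 'leaves'], 'objects': []}
    print(parse("Eats shoots and leaves."))
    # {'verbs': ['Eats'], 'objects': ['shoots', 'leaves']}

Same words, same order; the comma alone determines which higher-level structure gets built.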

Bottom-Up or Top-Down?

The problem of linearization has important ramifications for how one analyzes discourse. There has been an ongoing debate in biblical studies about which approach to DA is superior: Should it move from the top down or from the bottom up? And are the two actually mutually exclusive?

[Figure: Top-Down]

Kintsch’s research from the past several decades has demonstrated that our reading of written texts is really a combination of both. His findings are based on empirical research into human cognition and language processing of written texts—not an abstract model of what might be happening when we read or listen to a discourse.

[Figure: Construction-Integration]

Kintsch has found that our reading of texts involves iterative and almost simultaneous bottom-up and top-down processing, which he refers to respectively as “construction” and “integration.” As we read, we necessarily process language linearly—one chunk at a time. This is the bottom-up phase, which Kintsch calls construction. We construct a mental representation of the text itself as we read, as modeled in the linearization illustration above. But this is not all that is happening.

As the bottom-up construction occurs, there is also a top-down process occurring, referred to as integration. The newly forming mental representation of a text doesn’t exist in an isolated silo of our brain. Instead, Kintsch has demonstrated that we integrate the new representation into our existing, larger mental representation. This integration is not simply with the earlier portion of what we’ve read, or even with other books we’ve read, but with the sum of our knowledge about the world and how it operates, based on our prior learning and experiences. This is a simplification, but it gives you an idea of the importance of factoring cognitive processing into a model of discourse analysis.
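To illustrate the two phases, here is a minimal toy sketch in Python (my own drastic simplification, not Kintsch’s actual model): construct builds crude propositions from the linear word stream, and integrate merges them into a store of prior knowledge, reinforcing whatever coheres with what the reader already knows.

    # Toy construction-integration loop (a drastic simplification of Kintsch).

    PRIOR_KNOWLEDGE = {
        ("panda", "eats", "shoots"),
        ("panda", "eats", "leaves"),
    }

    def construct(sentence):
        """Bottom-up phase: turn a sentence into crude (subject, verb, object) triples."""
        words = sentence.lower().rstrip(".").split()
        subject, verb, objects = words[0], words[1], words[2:]
        return {(subject, verb, obj) for obj in objects}

    def integrate(propositions, memory):
        """Top-down phase: merge new propositions into memory, weighting by fit."""
        for prop in propositions:
            # Propositions matching prior knowledge get full activation;
            # novel ones are stored but start out weaker.
            activation = 1.0 if prop in PRIOR_KNOWLEDGE else 0.5
            memory[prop] = max(memory.get(prop, 0.0), activation)
        return memory

    memory = {}
    for sentence in ["Panda eats shoots leaves.", "Panda fires pistol."]:
        memory = integrate(construct(sentence), memory)

    for prop, activation in sorted(memory.items()):
        print(prop, activation)

Note that two readers running the same construction step over the same text would still end up with different representations if their prior knowledge differed, which is the point of the next paragraph.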

Kintsch’s description of the integration phase of language processing has great explanatory power, helping us understand how it is possible for two people to read the same text and come to quite different conclusions about it. Differences in background knowledge, goals, and presuppositions all play a role in how we process a text. We don’t just read a text; we also integrate it with what we already know. This also explains why some of you who read this blog post will cheer it while others reject it or find it boring. Our own mental representation of the world—and presuppositions about DA—play a huge role in how we process new texts or communication.

Analyzing Higher-Levels of Discourse

Now let’s return to the original question about my focus on sentence-level features versus tackling higher-level analysis. The linearization problem gives us a clue about the significance of lower-level features for the higher-level shaping of the discourse. The ugly little truth is this: the higher up one goes in the analysis of discourse, the fewer explicit and unambiguous markers there are to guide the analyst. Instead, higher-level analysis is driven by our mental representation of the discourse, which in turn goes back to things like chunking into developments, information structure, forward-pointing devices, the thematic prioritization of information, and our existing mental representation of the world. These lower-level features impact the higher level as they are integrated into our larger mental representation of the discourse. In this sense, talking about levels of discourse is largely a theoretical abstraction, not necessarily a model of how we actually process texts.

Focusing on the lower-level features pays dividends as one moves to the higher levels. Why? Many features that operate at the lower levels also impact the higher levels, as alluded to above. In fact, there are very few linguistic features that operate exclusively at higher levels. Most that come to mind are actually typesetting or orthographic conventions, like paragraphing and titling. These are largely modern conventions.

Lower-level features are able to accomplish higher-level functions when they are found clustered together with other features. For example, inferential conjunctions like ουν are rightly viewed as having a higher-level function, but they nevertheless still simply conjoin two clauses. Judgments about joining higher-level units are made by considering their co-occurrence with other “boundary features” like those described by Levinsohn in Discourse Features.
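As a rough illustration of how such clustering might be operationalized (my own sketch, not Levinsohn’s procedure; the features and threshold are hypothetical), one could score each clause juncture by how many boundary features co-occur there:

    # Hypothetical sketch: no single feature marks a higher-level boundary;
    # a cluster of co-occurring sentence-level features does.

    junctures = [
        {"conjunction": "oun", "topic_shift": True,  "time_change": True},
        {"conjunction": "kai", "topic_shift": False, "time_change": False},
        {"conjunction": "de",  "topic_shift": True,  "time_change": False},
    ]

    INFERENTIAL = {"oun", "dio"}  # conjunctions often associated with boundaries

    def boundary_score(features):
        """Count the boundary features co-occurring at one clause juncture."""
        score = int(features["conjunction"] in INFERENTIAL)
        score += int(features["topic_shift"]) + int(features["time_change"])
        return score

    for i, features in enumerate(junctures):
        label = "likely unit boundary" if boundary_score(features) >= 2 else "continuity"
        print(i, features["conjunction"], label)

The conjunction still only joins two clauses; it is the convergence of several features at the same juncture that licenses the higher-level judgment.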

So in terms of methodology, Levinsohn and I may begin with sentence-level features, but this is not where things end. The lower-level analysis necessarily serves as a foundation. Think Lombardi: the game is won or lost based on how well we have mastered the fundamentals. If there are problems there, they tend to compound as one moves up in the analysis.

Focusing on lower-level analysis also plays a “quality assurance” role in higher-level analysis of texts. It allows us to double-check our work using a top-down reading, to ensure we are able to reconcile all lower-level analyses with our higher-level claims. If we can’t, then we need to go back and figure out where we went wrong. Any who have done DA with me at SBTS, DTS, WEST, or Wycliffe Hall-Oxford can attest to this. Just like Bugs Bunny, I can miss that important turn at Albuquerque and need to reconsider lower-level decisions to better account for higher-level phenomena.

In contrast, those focusing solely on “discourse-level” features are not troubled by implications from discourse grammar. The biblical writer’s lower-level choices that contradict the analyst’s higher-level claims can only serve as a corrective if the analyst has attended to them. Otherwise ignorance is bliss, so to speak.

This explains my preoccupation with “sentence-level” features, like connectives, highlighting, and structuring devices. Call me silly, but it would seem that if one has properly understood how a device operates in simplex contexts at the lower levels, then one will be in a much better position to adequately describe its much more complex interaction with other features at the higher levels of discourse processing, i.e., the integration stage. Again, I am not really sure we can discretely separate higher-level from lower-level if the same devices operate at both.

Practical Payoff of Discourse Grammar for DA

There is a time for doing DA, but we should not rush on to higher-level analysis at the expense of attention to lower-level details. Why? If the lower-level exegesis is flawed, think about the implications for the higher-level conclusions drawn from it. I have opted to major on the lower levels in order to avoid preventable mistakes at the higher levels.

Those who complain that discourse grammar only considers sentence-level features must have very different presuppositions about DA and how discourse features actually work. The reasoning seems to be that because I am not postulating about broader themes of a book or “zones of turbulence,” I must not be doing DA. But our methodology must account for the reality that most linguistic features contribute at multiple levels of the discourse. In other words, zones of turbulence are best understood as a clustering of discrete, describable, sentence-level discourse features that function in concert to accomplish a higher-level function. Making assertions about higher-level features is one thing; being able to describe how the lower-level features combine to produce the higher-level result is quite another. If there really is a zone of turbulence, then your case will be strengthened by demonstrating the contribution of each component.

There are indeed different levels of analysis, but failures at the lower levels tend to be magnified at the higher levels. Attention to the lower levels provides a safeguard and corrective for the higher-level analysis.

For those who criticize me for focusing on sentence-level phenomena, the discussion and diagrams above are a preview of a larger project I will be undertaking in 2015 on moving from discourse grammar to discourse analysis. Discourse Grammar of the GNT was intended to serve as a foundation for later work, not as a manual for analyzing discourse. The newly shipped High Definition Commentary: Romans offers a simplified example of my approach to DA, demonstrating its practical payoff for the pastor or teacher.

  1. Walter Kintsch, Comprehension: A Paradigm for Cognition (Cambridge; New York: Cambridge University Press, 1998), 93.
  2. Gillian Brown and George Yule, Discourse Analysis, Cambridge Textbooks in Linguistics (Cambridge; New York: Cambridge University Press, 1983), 125.
  3. Lynne Truss, Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation (New York; London: Penguin, 2003).
Oct 21 / Steve Runge

Continuing education in discourse studies

One of the most common questions I receive concerns what steps one can take to keep digging into discourse grammar after finishing my Discourse Grammar of the GNT and Levinsohn’s Discourse Features, and after interacting with the annotated Lexham Discourse Greek New Testament. There are two prongs to my response. First, you don’t necessarily need to pay tuition to learn. When I left my MTS program I had a list the length of my arm of things I wanted to dig into more deeply. Post-graduation is the perfect time to do that, plus it helps you maintain the scholarly discipline you developed while doing your degree, assuming you formally studied somewhere.

The second part is that there are so many great resources on the Web to help you learn and grow that you should be able to read for quite a while before you run out of material. The key thing will be developing a reading list that progressively orders the material. For instance, I quickly found I lacked the background needed to read Levinsohn’s Discourse Features; it assumed too much background information that I had not yet learned. So I went back and read Dooley and Levinsohn’s Analyzing Discourse. But I again found this assumed too much background, given its terse style of writing. So I decided I needed to read Lambrecht’s Information Structure and Sentence Form, the hardest book I had ever read. I had no background in cognitive linguistics, so I went back and started reading the earlier works of Wallace Chafe on which Lambrecht had built. I was finally beginning to get somewhere.

How did I find these “prequel” books? By reading the footnotes and bibliographies. If someone is doing productive work that builds on someone else’s, then those other works get added to my to-do list. Reading the primary texts on which others have built also ensures that I really understand the original, and not just its application by someone else. This kind of digging is what led to the publication of my article on Porter’s misuse of contrastive substitution. Checking his primary sources of support revealed the contradictions between his arguments and particularly those of Stephen Wallace.

Another way of gaining direction is asking someone. For me it was Stephen Levinsohn and Randall Buth. The former recommended I read an introduction to linguistic typology. And so on it went, for a really long time. Then, after about three years, I had finally come back around full circle to have enough background to read Discourse Features. It was then and there that I swore before God that I would write a book that could serve as an easier prequel than the path I had to take. My colleague, Josh Westbury, has also developed a recommended reading list for developing a general background in linguistics. It is not meant to be read in order, but is simply a compilation.

As far as continuing your studies, be sure to note that you can continue learning without necessarily having to pay tuition. When I finished my MTS I had all sorts of questions I’d flagged for later study, ones I did not have time to pursue while writing my thesis. I began digging into these, and also began working through the required reading list of top Semitics programs I dreamt of enrolling in. It all kept me learning, as well as preparing for doctoral studies. The four years of reading allowed me to catch up financially and get a jump on the doctoral level reading I’d need to do. When I had questions, I’d write to scholars I had met to pick their brains. Most were more than happy to either answer my question, or to direct me to the reading that would fill in the hole in my theoretical framework that my question revealed.

There is no precise path forward; the answer to “How do I proceed?” really depends on the kinds of things you want to learn or to be able to do. If you find something interesting, start digging into its bibliography and citations and read those sources. Keep going until you find yourself circling back to the same cited materials. Then move on to another area. It is hard work, but the great thing about learning is that it can be a self-sustaining endeavor, especially when pursued as part of a larger community. For me, that community initially consisted of my springer spaniel and interactions with scholars at SBL. The Web, Twitter, Facebook, Boxer, and video conferencing have completely changed the playing field. Hopefully you can find an easier path forward now than when I began in 1999.

Oct 13 / Steve Runge

On eclecticism in linguistics

<rant>

I have received several comments and inquiries over the last few months regarding methodology. These questions stem from claims and criticisms that have been rattling around NT studies for several decades concerning the legitimacy of eclectic approaches compared to the pure application of a system-based approach like that of Systemic Functional Linguistics. The general view I’ve heard espoused has come from SFL practitioners, and the general assertion is that eclectic approaches are not just dispreferred but border on illegitimate. While this may be true for those seeking to build a complete description of language within a single system, most applied linguists in linguistics proper are not afraid to adopt proven principles from another, theoretically compatible approach. Thus they utilize a variety of approaches in order to best tackle the problem at hand.

I’ll begin with a quote from the introduction of Dooley and Levinsohn (2001:iii):

First, we intend it to be practical, addressing issues commonly confronted by field linguists. Rather than attempting to apply a rigid theory or survey a variety of approaches, we provide a methodology that has been refined over years of use. Second, although we follow no rigid theory, we aim for more than a “grab-bag” of diverse methodologies by attempting to present the material within a coherent and productive framework. Specifically, we follow a functional and cognitive approach that seems to be a good approximation of how discourse is actually produced and understood.

The key phrase there is “how discourse is actually produced and understood.” This is the focus of applied linguists, not theorizing about how language as a whole operates.

Where do linguistic theories come from?

This complaint about eclecticism demonstrates a disregard for the historical development of linguistic theory, including SFL. No linguistic theory has developed ex nihilo, nor do any continue to develop in hermetic isolation. Rather, we most often find an evolutionary process at work, with each method building on those that went before. Halliday, Lyons, and the early functionalists were reacting against the formalist movements of the day. Specifically, Halliday concentrated on tracing the development path, mapping all the different choices that are made in the production of a discourse as a functional system. His basic assumptions about choice and meaning, and about cohesion and coherence, were sound, but not all of his ideas panned out. Halliday began his theory by incorporating insights from the Prague School’s Functional Sentence Perspective (FSP), such as the notions of Theme/Rheme and given/new. He also worked with Hasan to make significant claims about coherence and cohesion. But not everything went right.

Halliday’s reformulation of the Prague School notions of theme and rheme into his systemic theory led to problems. His claim that theme is “what comes first” doesn’t work outside of English, in languages like Greek, Hebrew, or any other highly inflected language. Similarly, his assignment of given/new to the “tone group” works okay for English, but breaks down quickly in languages that use means other than intonation to mark such things. For instance, Japanese uses particles to differentiate P1 from P2, not just intonation. So while there is much to praise about Halliday’s overall assumptions and his insights into coherence and cohesion, there was a need to better account for what he mucked up in information structure. This is partly why there has been a split (or coup) within the SFL ranks, which I note in section 9.4 of my Discourse Grammar.

Other responses to the Prague School

There were other theorists besides Halliday working on the problems left by the Prague School, such as Paul Grice, whose application of logic to pragmatic choice contrasted with Halliday’s socio-cultural focus. Grice developed a very concise summary of pragmatic “rules of engagement” that offered a better account of how such choices are made in language than Halliday could offer, summarized in Grice’s maxims of conversational implicature. Grice’s maxims were then further refined by Stephen C. Levinson from the original seven maxims (which I can’t remember) to three: be brief, be relevant, be perspicuous. Neo-Gricean pragmatics could account for the kinds of socio-cultural influence that Halliday had on his to-do list but for which he offered no means of accomplishment. The pragmatics folks were not concerned with Halliday’s broader questions, but instead with solving the one piece about which they were most passionate. They carried on their specialized work on pragmatics, but those who followed took the insights from Gricean pragmatics, added them to Halliday’s insights about the systemic functional nature of language (choice implying meaning, etc.) and to Halliday and Hasan’s insights about cohesion and coherence, and carried on their merry way looking into other nagging issues. Recall that all of this was built on the correct insights from the Prague School.

Then along came Sperber and Wilson with their insight that Grice’s maxims could be distilled not just from seven to three principles, but to just one: the principle of relevance. Relevance Theory (RT) sought to re-envision the entire language production and comprehension process based on this one principle. In doing so, they demonstrated that the brain is indeed involved in language production, as Chomsky had theorized. But instead of some universal grammar hard-wired into our beings, they found that grammar and language are a natural outworking of our cognitive processing. Like Halliday, RT was mostly consumed with English, so others needed to adapt and redirect its basic insights in order to account for the typological data and patterns found in other languages.

Some focused on the cognitive aspects of language, including Wallace Chafe, Walter Kintsch, Ronald Langacker, and George Lakoff. Their work predates RT, but all were seeking to account for the same kinds of phenomena. Chafe and Kintsch wanted to understand what we actually do with discourse as we process it. Chafe found that we don’t store the words we read or hear, at least not after the first bit of time has passed. Instead these words somehow become transformed into mental pictures or “representations” of what was processed. Kintsch, in his research into reading comprehension—what we really should be reading to develop a DA methodology for written texts in biblical studies!—found that we iterate between construction of Chafe’s mental representation and integration with what we already have in there, based on our knowledge of the world, previous experience, etc. Lakoff found that words don’t have meanings so much as meanings have words. When I say the word “dog,” what pops into your mind is likely slightly different from what pops into mine, yet there is an agreed-upon, prototypical range of meaning for “dog.” When this range is stretched, we mark it by calling something “dog-like” or the like. They found that most meanings have fuzzy edges, not the black-and-white boundaries envisioned by Aristotle. This has led to tremendous development in the area of cognitive linguistics and human comprehension. Nevertheless, it all still builds on the bits that Halliday and others got right, while seeking to fix what they either got wrong or had no interest in accounting for.

Typologically-informed approaches

Meanwhile, back at the ranch, there were others who sought to apply the Prague School insights—like Halliday—but outside of English. Simon Dik began looking at minority languages to determine whether theme and rheme were indeed universal across languages. He found they largely were, but Halliday’s account based on English was too idiosyncratic to scale out to non-Western languages. This led to the rise of (Functional) Discourse Grammar (FDG), a typological updating of the Prague School. It was Dik who theorized that there were two functional motivations for marked word order, represented in the preverbal slots P2 and P1 (what I have called emphasis and frames of reference, respectively). They also worked through other issues overlapping with Halliday, but somewhat in isolation from what was going on elsewhere regarding cognitive approaches.

Chafe too was interested in information structure, specifically its intersection with cognitive processing. This led to the development of Construction Grammar, which describes set linguistic patterns and functions as “constructions.” Chafe’s greatest disciple, Knud Lambrecht, pulled together the pieces from the Prague School, cognitive linguistics and mental representations, Gricean pragmatics, and the given/new distinction into a coherent account of information structure, one that offers a satisfying account from the shaping and production of utterances all the way through to their cognitive processing and storage by the hearer/reader. Lambrecht’s model has subsequently been adopted almost wholesale in RT, Role and Reference Grammar, and elsewhere. Kintsch’s work in reading comprehension offered independent corroboration of many of Lambrecht’s ideas.

Objectives shape the theory

The differences we find among linguistic methodologies do not stem from one being right and another wrong; they stem from the differing questions each theory is attempting to answer. SFL focuses, among other things, on the socio-cultural factors that shape language use, but not on cognitive processing. When the need arises to develop this other area, SFL can either reinvent the wheel or borrow from another theory that has already done the hard work of development. We see this happening if we track the other functionalists who came along shortly after Halliday.

Folks like Foley and Van Valin asked a brand-new question, partly in response to Halliday’s English-centric problems: What would a typology of language look like if it had not begun with a Western European language like English or German? Role and Reference Grammar (RRG) thus works toward a unified framework that can account for the kinds of things Halliday first postulated, but without the predisposition to English. RT seeks to do the same thing, but in terms of the basic presupposition of relevance. Construction Grammar (ConstG) seeks to do the same thing. How? By taking the pieces that work from the other approaches and filling in the gaps or errors in light of each approach’s specific goals and objectives. They all begin with the Prague School approaches that needed improvement, then build on Halliday’s understanding of cohesion and coherence, then build on the work of Chafe and Lambrecht on information structure and mental representations, which in turn builds on Gricean pragmatics and RT’s basic assumption of relevance. All have common roots and common presuppositions, but unique objectives. To the extent that their basic functional presuppositions agree, each can utilize the work of the others to improve the whole while accomplishing its unique objectives.

Consequences of purist theoretical approaches

So what is the problem with eschewing eclecticism to maintain a pure theoretical framework like SFL? Nothing, really. It is a valid choice, but it is not without its drawbacks. To begin with, Halliday’s school of SFL has done little to incorporate the critical insights from cognitive linguistics in terms of mental representations, relevance, and information structure. Some have maintained their purity to their own detriment as they refuse to incorporate new insights from other approaches. This isolation has also led to a revolt of sorts: the Cardiff school of SFL has broken with Halliday’s Sydney school on certain issues, based upon the latter’s resistance to updating its ideas in light of these revolutionary insights.

The purity of SFL within NT studies

Let us return more specifically to the application of SFL to NT studies by Porter. His claim of maintaining theoretical purity stands at odds with his incorporation of concepts from outside SFL. I have found no evidence of contrastive substitution being adopted and reworked within SFL (critiqued here), nor of Porter’s symmetrical approach to markedness theory (critique forthcoming). Both have been borrowed, which sounds like the kind of eclecticism about which he complains. I applaud him for using things that work; every applied linguist worth his salt does the same. My problem lies in Porter claiming to be operating purely within SFL when in fact foundational pieces of his theoretical framework are borrowed. The same could be said of Porter’s incorporation of frequency analysis; I have found no such approach evidenced within Halliday’s SFL. One should not play both sides of the fence.

Advantages of eclecticism

This has been a lousy attempt at narrating the history of functional linguistics, but my point is simple. If you trace any modern theory of language (SFL, RT, RRG, FDG, CogG, ConstG) back to its origins, you will inevitably find evolutionary eclecticism along the way. Everyone has built on the bits that others have gotten right. The exceptions are the pure theorists, whose approaches necessarily demand that they reinvent the wheel in order to assimilate X into their worldview of language. Most everyone else is eclectic to one extent or another.

Adopting a purist approach necessitates not incorporating proven insights from another theory until they have been assimilated into one’s own. Levinsohn and I are too practically motivated to do this; we are applied linguists, not theoretical ones, except when there is a need to be. If you look at the table of contents of Dooley and Levinsohn’s Analyzing Discourse, you will find this illustrated. They begin with Leech’s basic insight that the number of speakers involved in the production of a discourse impacts the kind of discourse that results. This is followed by Longacre’s insights from his Grammar of Discourse into how agent orientation and sequentiality meaningfully explain how different genres come about, and why features in one genre might operate differently in another. They then move to Halliday’s insights from SFL into idiolect, style, and register, adding another layer of complexity to our understanding of language. Then they shift from these complexities to the specific factors that hold a discourse together. D&L’s discussion of coherence and cohesion begins with Halliday and Hasan (H&H) because they nailed the basic concepts within SFL, but H&H didn’t provide the practical tools for working out these ideas at the sentence level. This is where D&L shift to other approaches that do address these practical questions, and where insights about cognitive processing from Chafe (CogG) on chunking—and Lambrecht (ConstG) on information structure—enter the picture, along with Levinsohn’s MA work on participant reference in Inga.

Levinsohn and I are indeed thoroughly eclectic, but the important point to recognize is that most everyone else doing applied work is as well, based on the evolutionary development of linguistic theory. We would still have been eclectic even if we had used only Construction Grammar or RT. Each has built upon the others.

Disadvantages of purist theoretical approaches

Being a true methodological purist has its drawbacks for the end user. It would have required that I not use proven insights from another approach or field, such as cognitive poetics, which is rethinking poetics and literary analysis in light of insights into the cognitive processing of language. I would only be able to use such insights after they had been assimilated into my theory.

More importantly for my task at hand, being a non-eclectic purist would have necessitated that my Discourse Grammar readers learn my method’s jargon and idiosyncrasies rather than the accessible description that eclecticism made possible in the published version. If you wonder why it is easier to understand than some other works in our guild, my eclecticism is partly to blame. The purist approach typically complicates things much more than eclecticism does, owing to the methodological constraints of the theory. Theoretical approaches are generally less applicable than applied ones, since the theory constrains the approach rather than the data.

Goals drive our method

The real question to ask is this: What exactly is our goal for linguistic analysis in biblical studies? Is it to develop a unified theory of language, or to functionally describe the features and their contribution to the discourse? Both are legitimate, but I fear the two have been conflated as though the latter necessarily presupposes the former. There is nothing wrong with Porter developing a theory of language to account for the NT corpus, but there are plenty of existing theories besides SFL that may readily and legitimately be applied.

A purist application of Halliday would not necessarily lead to understanding the structure and flow of Romans. Instead it would mean developing a theory of language that could account for the linguistic artifact we find in the NT book of Romans. Brown and Yule would also likely have a different objective for DA than most NT scholars would desire, since their focus is directed to spoken rather than written discourse. In my view, Walter Kintsch’s work on reading comprehension is really where we should be focusing: it is thoroughly up to date in terms of cognitive processing, and it is specifically focused on the comprehension of written discourse. Spoiler alert: this is the direction I will be heading in 2015 in my volume on moving from discourse grammar to discourse analysis.

One final point: if I wanted to be a purist, I could have reformulated most all of my descriptions of discourse features under the auspices of RT, ConstG, CogG, RRG, or FDG, so long as I adapted to their idiosyncrasies. All are in basic agreement on fundamental presuppositions, which is why I could operate within any one of them. However, their differing objectives, like RRG imagining a non-Western typology, necessarily lead to differences in approach. So eclecticism is not as reckless or strange as Porter has portrayed it, nor is methodological purity the only legitimate linguistic option. The reality is that there is no perfect framework; each is focused on a different part of the same overall puzzle.

</rant>

<edit>

I trust that Dr. Porter is well aware of the principles and history that I describe above. I believe that he has framed the eclectic-versus-pure-theory issue in this way for rhetorical advantage. Since 1989, he has been writing for an audience within our guild that is largely ignorant of the broader field of linguistics. People have taken his claims at face value, assuming they accurately reflect the broader discipline outside NT studies. Those of us who have read more widely in linguistics proper recognize the advantage he has gained in framing things this way. The point of this post is to offer an alternative view on the matter.

If Porter wants to invest his energy in theorizing about the language of the GNT, there is nothing wrong with this. My calling is to provide something that has a more practical payoff for the guild. Denigrating eclectic linguistic approaches as illegitimate demonstrates an ignorance of how modern linguistic theory, especially functional theory, has developed. Keep that in mind as you hear this argument put forward in San Diego next month.

</edit>

Apr 15 / Steve Runge

On light and darkness

John 1:4–5: “In him was life, and the life was the light of humanity. And the light shines in the darkness, and the darkness did not overcome it.”

This is a very familiar text, one which any Christian would readily affirm. This is indeed what Jesus has accomplished through His death and resurrection from the dead. Nevertheless, we see mature believers lose sight of this truth. Why? How? Well, there seems to be a competing reality, one which is false yet still carries significant weight at times. This alternate reality might simply be termed “darkness.”

What do I mean by darkness? It can be circumstances that appear insurmountable, as though there is simply no hope for any kind of meaningful change or respite. It might take the form of unbearable disappointment, having hopes and dreams crushed into oblivion, with seemingly no possibility that anything good could come of them. It can stem from the shame and regret associated with sin, when the desire to change is countered by the humiliation and pain that restitution and reconciliation seem to require. In each case, one is led to believe that living in the darkness is the only viable option.

Jesus’ incarnation shined a light in the darkness, but it did not make the darkness go away. He has overcome the darkness, but this doesn’t mean that it no longer has any power. Jesus has set us free from the power of sin and darkness, but we still face the challenge of turning away from both in order to follow Him. Sin and darkness only have the power we give them as we choose where to set our minds. Here’s how Paul phrases it in Romans 8:5-13:
For those who are living according to the flesh are intent on the things of the flesh, but those who are living according to the Spirit are intent on the things of the Spirit. For the mindset of the flesh is death, but the mindset of the Spirit is life and peace, because the mindset of the flesh is enmity toward God, for it is not subjected to the law of God, for it is not able to do so, and those who are in the flesh are not able to please God. But you are not in the flesh but in the Spirit, if indeed the Spirit of God lives in you. But if anyone does not have the Spirit of Christ, this person does not belong to him. 10 But if Christ is in you, the body is dead because of sin, but the Spirit is life because of righteousness. 11 And if the Spirit of the one who raised Jesus from the dead lives in you, the one who raised Christ Jesus from the dead will also make alive your mortal bodies through his Spirit who lives in you. 12 So then, brothers, we are obligated not to the flesh, to live according to the flesh. 13 For if you live according to the flesh, you are going to die, but if by the Spirit you put to death the deeds of the body, you will live. (LEB)
Where we choose to set our mind is a matter of life and death. The mind set on the flesh has only one outcome: death. I do not believe Paul here only pictures indulgent, lust-filled living. Rather, Paul repeatedly talks about the need to have our minds renewed, the need to fix our focus on things above instead of things around us (Rom 12:2; Gal 5:16–18; Phil 4:4–9). When we choose not to rejoice, not to meditate on what is true and honorable and pure, to set our mind on the flesh rather than the Spirit, figuratively speaking we are turning our back on the light and returning to darkness.
Jesus is indeed the light of the world, and we will celebrate this on Easter Sunday. But we cannot forget the reality of the darkness that remains. Jesus has set us free from the power of sin and darkness, but we still have the option of giving both a power they should no longer have. Darkness is still darkness, and it still leads to death. If our mind is set on our own perspective of our circumstances, on our perspective of disappointment, on our inability to see a possible way forward, we are exchanging light for darkness.
On Saturday afternoon I received word that a friend, a pastor with whom I’d served through some rough patches in ministry, had taken his own life. I do not know what he was thinking or why he did it, nor will I likely ever know. I do not believe there was some great sin lurking in his closet. But I can’t help but think that somewhere along the way he allowed darkness to take the place of light in small ways. His decision to end his life may have come quickly, but the underlying causes that led him to it were most likely a slow progression. I know this is a bit out of context, but I don’t think Jesus would mind: “Therefore if the light in you is darkness, how great is the darkness!” (Matt 6:23). Unthinkable decisions begin to look like viable options in the light of darkness, especially where the light really is darkness.
I wish what happened last week could be called an anomaly, but I have seen it repeated. In fact, the guy who led me to Christ in 1985, who went on to serve as a pastor, also ended his life some years back. While some around me reacted angrily to these decisions, I found it hard not to be empathetic. Twice I have had medical crises keep me from working, where the combination of medical bills and no income brought on a darkness so thick it seemed impenetrable. I have had hopes crushed near the conclusion of a long path of hard work, making it all seem in vain. I have been ashamed by the consequences of sin, left wondering if there was sufficient grace and love to possibly rebuild what my choices had destroyed. All of these were the darkest of moments in my life. As I isolated myself, as I sought to find a way forward by my own understanding, I found the darkness overwhelming me. Why? Because the decisions to isolate and depend on myself were in reality decisions to turn away from the Light of life.
When we allow circumstances and situations to rule our lives and shape our perceptions of things, we are not walking in the light. “If the light in you is darkness, how great is the darkness?” Very great indeed. If we stand alone, we will fall alone. 1 John 1:5–7 offers a better way forward:
“And this is the message which we have heard from him and announce to you, that God is light and there is no darkness in him at all. If we say that we have fellowship with him and walk in the darkness, we lie and do not practice the truth. But if we walk in the light as he is in the light, we have fellowship with one another, and the blood of Jesus his Son cleanses us from all sin.”
The darkness has already wrought enough havoc. If there are areas of darkness you have made peace with instead of turning away from them, it’s time to put an end to it. If circumstances are too hopeless to bear, then stop trying to bear them alone. I cannot offer you promises of quick fixes without any consequences. All I can do is give testimony of God’s faithful shepherding of me out of these dark patches and back into the light. Yes, they were patches; they did not stretch to infinity and beyond, despite feelings to the contrary at the time. This was accomplished in large part through the ministry of other believers in my life, not alone.
Let someone in. Ask for help. Do not give the darkness power it no longer should have.
Jan 30 / Steve Runge

Summer internships in Greek Discourse Grammar

I have gained approval to try something that I have wanted to do for years: offering summer internships at Logos. Many have asked about opportunities to study with me, and this is as good as it will get. I’ll mentor interns in their synthesis of discourse features into a unified reading of a NT book or portion of a book. Publication of a discourse handbook would be the end result of the research.

I have had the privilege of doing some intensive teaching as a visiting professor, as well as mentoring a few poor souls at a distance. Although this has served its purpose, there is nothing like intensive collaboration for gaining applied knowledge. There is also nothing like comprehensive application to shake your theoretical framework to the core and identify areas that need attention. A summer internship is the best means I could think of to make this possible.

If you want to learn how to analyze a book, how to shape ideas into a research proposal, how to weigh the impact of one feature against another, there will be no better opportunity than bringing all your knowledge to bear in this summer internship. The research from the summer should lay much of the groundwork needed for outlining a doctoral proposal. It would also form the basis for ongoing mentoring at a distance through your dissertation.

While this internship offers the opportunity to develop your research skills and theoretical framework, it is primarily about writing up what you have found. In fact, your ability to clearly and succinctly describe the features of the text is of the utmost importance. The internship is a writing gig; research skills and other benefits are simply a natural consequence of the analysis and writing. This means you need more writing experience than exegetical papers for school, or even technical articles.

If you have read my work, you know I am passionate about making things accessible to non-specialists. As a discourse intern the same would be expected of you, and you’ll learn new ways to do it. This means blogging about grammar or language is likely the best preparation, besides having mastered the discourse grammar material. Below is the text that will appear in the ad at Logos; I am just not exactly sure when:

 

Greek Discourse Grammar Internships

Logos Bible Software is seeking highly qualified candidates for Greek Discourse Grammar Internships this summer.  Successful candidates will have mastered the concepts described in Discourse Grammar of the Greek New Testament, and will assist in the development of exegetical handbooks which help pastors and students better understand the exegetical implications of discourse features annotated in the Lexham Discourse Greek New Testament.  Interns will work directly with Dr. Steven E. Runge as part of the Logos Discourse Team, providing an unparalleled opportunity to develop the skills and theoretical framework needed for advanced research in the field of NT discourse analysis and discourse grammar.

Responsibilities

  • Describe how the various discourse features annotated in the Lexham Discourse Greek New Testament contribute to the overall flow of a NT writer’s message using prose accessible to non-specialists. Your specific project will be determined in consultation with Dr. Runge. Writing skills are as important as knowledge of discourse grammar.
  • Ability to work as part of a collaborative team.

Requirements

  • Summer relocation to Bellingham (non-negotiable)
  • Ability to synthesize the exegetical implications of a writer’s choice to use various discourse features, and to describe their contribution to the overall flow of the discourse.
  • Ability to succinctly and accessibly describe technical linguistic features for readers with a traditional background in Greek.
  • Two Years of Greek, completed MA/MDiv (or equivalent) preferred.
  • Has mastered the concepts described in Discourse Grammar of the Greek New Testament.


Application Process

Please submit a CV, a letter of interest describing your background in discourse grammar, and a writing sample (or hyperlink to specific blog posts) to devjobs@logos.com. Applications are due by March 15, 2014. For more information about Logos, see https://www.logos.com/about/careers.

 

Why bother coming all the way to beautiful Bellingham during the very best season of the year? Here are a few reasons:

  1. Learning while being paid a modest wage (with a modest relocation allowance) instead of paying tuition.
  2. Opportunity to practically and intensively apply a tested theoretical framework.
  3. The chance to formulate a research proposal for a future dissertation project, and to develop the working relationship needed for ongoing mentoring in your research.
  4. A publication credit.

 

Aug 24 / Steve Runge

Whence a tenseless Greek Indicative?

Within NT studies, the notion that the Greek verb lacks tense/temporal reference has become fairly widely accepted. If you compare this tenseless view of Greek with what has been claimed by every linguist and grammarian Porter cites in his research, you might scratch your head a bit. Why? Not one of them argues that Greek lacks tense. Linguists like Lyons, Comrie, Wallace, and Haspelmath treat Greek as a mixed system, with tense, aspect, and mood all present in the indicative.

If it is true that the broader field of linguistics has treated Greek as having both tense and aspect, where did the “tenseless” idea come from? Why did it come about? I do not really know the whole story, but it seems that Porter was seeking to account for the incorrect claims of mainly commentators–not grammarians1–some of whom treated Greek verbs as though they had absolute temporal reference. It also seems that he was seeking to set his work apart in what was becoming a crowded field by claiming something no one else had claimed before, viz. that the Greek indicative lacks any temporal semantics.

But again, why was there a need to claim a total lack of temporal reference? In my last post I highlighted the areas of significant consensus between Porter and me. However, Porter added two significant claims in his dissertation, claims found neither in biblical studies nor in linguistics: a tenseless view of the verb and a semantic weighting/prominence view of the verb. Both of these proposals lack support or motivation from the field of linguistics. They are essentially rhetorical inventions originating in Porter’s dissertation.

The basic premise of the timeless view is not just to argue against the presence of absolute time/tense in the verb in favor of aspect. Rather, it completely rejects the notion that the Greek verb conveys any temporal semantics in the indicative. The most compelling data for this is the multivariate use of the Present, attested by the statistics gathered in Decker’s Temporal Deixis of the Greek Verb in the Gospel of Mark with Reference to Verbal Aspect. Note the almost equal distribution of the present tense-form in past, present, and future temporal contexts, not to mention the timeless/atemporal uses, cited from my HP article (p. 215):

[Table 1: distribution of the Present tense-form across past, present, future, and timeless/atemporal contexts in Mark; from “The Verbal Aspect of the Historical Present Indicative in Narrative,” p. 215]

As you can see, the Present tense-form shows the most damning distribution when it comes to the traditional understanding of it as referring to present time.

If you have read much of my work, you will have heard me harp on the importance of one’s theoretical framework. This lesson was thankfully beaten into my head by Larry Perkins, Stephen Levinsohn and Christo Van der Merwe; it has saved me from ruin on a number of occasions, and held the key to unlocking sticky problems. The strange distribution of the Present indicative is one of them.

There are two widely accepted principles that were ignored by Porter and those who have adopted his model. The first is the fact that almost every Indo-European language–including English, Greek, German, etc.–has what linguists call a past/non-past distinction, rather than the past/present/future distinction presupposed by Porter. This means that the Present tense-form in these languages doesn’t exclusively refer to the present, but rather more broadly to the non-past.

For example, I could say “I am eating dinner with Bob [Monday]” and have either a present or a future meaning depending on the presence or absence of the adverb “Monday.” So too with Greek. This means that the “futuristic presents” are not anomalous, but are behaving exactly as a good Indo-European language would be expected to behave. The failure to incorporate a past/non-past principle into his framework led Porter and those who have followed him to misconstrue the data.
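Here is a minimal sketch of the difference between the two framings (my own illustration with made-up examples): under a past/non-past system, a “present” form is underspecified between present and future reference, and context such as a temporal adverb narrows it down.

    # Toy sketch of a past/non-past tense system: the non-past form covers
    # both present and future reference; context disambiguates.

    FUTURE_ADVERBS = {"monday", "tomorrow", "next week"}

    def temporal_reference(form, adverb=None):
        """Resolve the temporal reference of a verb form in context."""
        if form == "past":
            return "past"
        # Non-past: future if a future adverb is present, otherwise present.
        if adverb and adverb.lower() in FUTURE_ADVERBS:
            return "future"
        return "present"

    print(temporal_reference("non-past"))            # present
    print(temporal_reference("non-past", "Monday"))  # future

On this view, a “futuristic present” is not a counterexample to tense; it is the expected behavior of a non-past form.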

The second principle missing from his framework was treating the historical present as a pragmatic usage rather than as prototypical. The numbers above treat the past use of the Present as though it were part of the form’s basic semantic meaning, rather than a pragmatic highlighting device based on the mismatch of tense and aspect with the narrative context. You’ll need to read the paper for the full argument.

Recognizing the past/non-past distinction and treating the historical present as a pragmatic device–just as both traditional grammarians and linguists have done for decades–changes what originally seemed like a mess into something quite a bit tidier. Here is the updated table from my article:

  • The HP usage is excluded, since it does not represent the basic semantics of the Present.
  • The future reference is part of the core semantics, based on the non-past reference.
  • The temporally undefined data tells us nothing about temporal reference; it tells us only that the form was chosen for the aspect it conveys in a timeless/atemporal context. It should thus either be included in the core usage or excluded as telling us nothing about temporal reference.
[Table 2: reanalyzed distribution of the Present tense-form in Mark; from “The Verbal Aspect of the Historical Present Indicative in Narrative,” p. 216]

In either case, the Present indicative forms in Mark show a 99% consistency in usage with what would be expected of an Indo-European tense-form.

So had the widely accepted linguistic notions of a past/non-past temporal distinction in Greek and of the historical present as a pragmatic usage been incorporated into Porter’s theoretical framework, there would have been no basis for making a tenseless/timeless argument for Greek. There likely would not have been much of a Porter/Fanning debate, and our field would not have squandered the last 20 years arguing about a linguistically unsound proposal.

One’s presuppositions play a huge role in determining outcomes. Beware.

  1. See Mike Aubrey’s comments here and here. The older grammarians were not as wrong as Porter has made them out to be. The key is not to judge them anachronistically.
Aug 17 / Steve Runge

Aspect: Areas of Agreement

A great point was raised on Facebook yesterday that deserves a bit more consideration. The comment came from a NT scholar looking in as an outsider to the debate about the Greek verb. There seems to be this notion that if one rejects the idea of a timeless verb in Greek, then one must also reject the idea that it conveys verbal aspect. Thus, to do so dooms one to become re-enslaved to viewing the verb as conveying absolute tense and Aktionsart, the very things Frank Stagg fought against. DOOM, DOOM!

Well folks, I am here to tell you that this is not really the case. Regardless of the hype, there is actually quite a bit more consensus on these issues than you might think. If you like Porter’s taxonomy then we have something in common; I like it too. Here is what I mean:

  1. Greek tense-forms convey perfective aspect, imperfective aspect, or a third kind of aspect.
  2. The aspects are present in every mood, whereas tense (“spatial proximity/remoteness” for you timeless folks) is only found in the indicative mood.
  3. The aorist conveys perfective aspect, the present and imperfect convey imperfective aspect, and the perfect and pluperfect convey a third thing. Porter calls it stative aspect, which I can live with.1

On these issues I have sided with Porter’s taxonomy, both on the web and in print. Other than the quibbling over what to call the perfect, there is a high degree of consensus regarding perfective and imperfective aspect, and how the Greek tense-forms align with them.

So how did people get the impression that rejecting some portion of Porter’s framework means rejecting it all? Well, he has framed it as an all-or-nothing proposition. He has cast things as though standing in opposition to his ideas is to argue in favor of a “once for all time” aorist and so on. This is a rhetorical scare tactic, but it has proven surprisingly effective. There does not seem to be another viable option available.

There is widespread consensus on these issues, save what to do with the perfect.2 Had Porter stopped here, there quite likely never would have been a Porter-Fanning Debate. Rather, it would have been something more like a Porter-Fanning Report on Aspect, and the field would have quietly continued working out the remaining issues over the next twenty years. However, this was not the case.

Instead of the field being able to move forward with a basic consensus about the Greek verb conveying a combination of tense and aspect in the indicative, and aspect-only in the non-indicative, we have had twenty-plus years of arguments based on two proposals put forward by Porter: a tenseless view of the verb and a semantic weighting/prominence view of the verb. These will be covered in future posts.

  1. Campbell considers the perfect another kind of imperfective aspect, which is half right. Fanning calls it a combination of things, which is understandable. No worries, the linguistic field itself does not yet have a consensus about the perfect, but it is getting closer.
  2. And I think that we will find consensus in November that Campbell’s “imperfective Perfect” is wrong.
Aug 16 / Steve Runge

Civility in Academic Debate

I have received several comments regarding my post yesterday that almost read like condolences. I understand the seriousness of the implications of my paper. This is precisely why I took steps over the past few years to see if there was a way of generating engagement other than publishing a deconstruction.

But there is another step that I took that I do not want to be overlooked: pre-peer review. Researching the issues surrounding the verbal aspect debate has been an exercise in anger management at times. The failure to engage counterarguments or to acknowledge widely accepted principles (e.g. the past/non-past distinction in most Indo-European languages) is intensely frustrating to me. Why? These missteps did not need to happen; all were preventable based on the literature that is cited. All that was needed was a willingness to fully engage it. Frustrating.

Unfortunately, this frustration found its way into early drafts of my paper. I tried to keep it civil, but unnecessarily “emotive” language came through. So what do you do? You ask for help from people who are smarter than you.

In my case, John Barry did a developmental edit of the paper to make it more concise. Stephen Carlson, Rick Brannan, Mike Aubrey, Josh Westbury, Mike Heiser, Dirk Jongkind, Chris Fresch and several others critiqued the paper before submission. Each scholar ended up commenting on different aspects of the paper, making the final product much stronger than the initial draft. Most importantly, each one called me out when I used unnecessarily inflammatory language that would distract from the main argument.

Not every paper warrants this level of preliminary critique. From what I know of peer review for journals, making such comments is beyond the scope of standard peer review. Mostly the veracity and coherence of the argument are weighed, not necessarily the language used.

I am very thankful that these folks were willing to invest the time and effort to critique my work. To be honest, it was painful reading when it came back (“Is it really necessary to…?”), but they noted things I most certainly had missed. It is quite easy to let anger and frustration get the better of you. In fact, it seems the Academy gravitates toward fostering cage-match panels as a means of attracting audiences.

But one overarching question remains to be answered. Why? Why all the posturing and belittling? Why the use of condescending language?

At what point did we set aside the pursuit of greater understanding for kingdom building? Why the fortresses encircled with trenches and barbed wire? I thought we had learned from WWI that even though there is a technical winner, everyone loses in the end. Perhaps the Cold War doctrine of Mutually Assured Destruction is the more accurate analogy, where people use fear as a deterrent.

I’d like to think that a formal discussion could be convened at some point where the issues raised could be considered by those without a dog in the fight. It would be great to have someone other than an interdisciplinary NT scholar–a bona fide PhD specialist in Systemic Functional Linguistics from outside our field like Mick O’Donnell, for instance–come and weigh in on the matter. Cage matches have some measure of entertainment, but they most often end up being a distraction from the real issues. It sure worked for Commodus in Gladiator, at least for a little while.

I am not so naive as to think that scholarship is an intrinsically pure enterprise. I accept that posturing, marketing, showmanship, and gamesmanship will play inevitable roles. But I am NOT willing to concede that vitriol and ad hominem attacks should just be accepted as part of the process. Is this really what biblical scholarship is all about? Really? The comments I received yesterday made it sound like a foregone conclusion, and maybe it is.

Call me a naive idealist, but I am more interested in getting things right than in being right. I don’t like being wrong any more than the next guy. But if research is done properly and reviewed properly, one can usually end up with both. It will be interesting to see what happens.

Aug 15 / Steve Runge

Porter’s Use of Contrastive Substitution

At the 2010 ETS meeting I presented an overview of some foundational errors in Stan Porter’s theoretical framework that significantly undermine the validity of his claims regarding the Greek verb. These issues initially came to light in research for my 2009 paper on the historical present. What I read left me with a knot in my stomach. Why? Well, Stan taught me second year Greek while I served as a TA for his first year Greek class at TWU. He was one of the folks who got me interested in linguistics in the first place, and he published my first article on Greek in one of his journals. I owe him a lot.

What was the big deal? The nature of the problems suggested a failure to adequately engage the linguistics literature. Significant counterarguments were ignored, as were warnings which should have led him to reach opposite conclusions about the presence of temporal reference in the Greek indicative tense-forms. One of the most significant pieces of evidence is the work of Stephen C. Wallace. I have posted his article, which is quoted at length in my critique. I would strongly encourage you to read it in its entirety. These problems were not just in his dissertation, but also in his recent writings on the prominence of the Greek tense-forms.

I tried to begin a dialogue within the Biblical Greek Language and Linguistics community by presenting these papers. The research leading up to each became a series of blog posts outlining the problems. Finally, in 2011, I met with Stan personally to present my concerns during the SBL meeting in San Francisco. My hope was that some sort of dialogue could be arranged, whether a panel discussion or something more private. He made clear that he only engaged such things after they appeared in print. This explained the lack of engagement with my papers and blog posts.

Print is a rather permanent medium, hence my reluctance to write a critical review. My hope had been that a way forward could be found that would allow Porter to retain his prestige as the promoter of verbal aspect in NT studies, but which would also allow needed corrections to be made. We ended things in 2011 with me facing the challenge of getting a paper published. For various reasons I did not pursue the issue any further.

Another year passed. I saw and read Porter chiding scholars for lacking what he considers to be sufficient linguistic training for undertaking interdisciplinary research. The latest example of this is found on the pages of JETS 56 (1): 94-95 in Porter’s response to Wallace’s response to Porter’s book review.

Stan is correct to point out that interdisciplinary work bridging from biblical studies into linguistics always carries with it the risk that one’s background in the secondary field is insufficient to support the level of research undertaken, but that cuts both ways. He and I are also interdisciplinary scholars, susceptible to the same kinds of problems stemming from overreaching our background.

My critique has been accepted for publication, but will not be available until next year. It was submitted only after allowing three years for more productive (and more discreet) engagement to come about. It is a strict deconstruction (which you’ll know is not what I do), but I felt I had little choice. Yet another panel discussion on verbal aspect looms at SBL 2013, with little indication that any progress will be made.

Here is the introduction:

Interdisciplinary approaches to NT issues have become increasingly popular, utilizing insights from other fields to tackle nagging problems within our field. One of the more popular approaches in Koiné Greek is the application of linguistics to problems not adequately addressed by grammarians and philologists within the guild. However, interdisciplinary work is a double-edged sword: it can have (and has had) great benefits, but only as it is employed in methodologically sound ways. The split focus demands that the scholar be a specialist in multiple disciplines, and that there is rigorous peer-review from both fields. Inadequate engagement with the secondary field can have grave consequences.

Such appears to be the case in Stanley Porter’s application of Systemic Functional Linguistics in his Verbal Aspect in the Greek of the New Testament, with Reference to Tense and Mood and his continuing work on verbal aspect and discourse prominence. Despite the fifty-page bibliography, Porter’s seminal volume offers scant theoretical or methodological substantiation for the claims that are most crucial to his argument that the Greek verb does not encode temporal information. Porter introduces concepts like contrastive substitution, semantic weight, and frontground without providing the requisite theoretical grounding or discussion of methodological constraints governing their legitimate usage. This article is limited to contrastive substitution, but the comments that follow may be applied more broadly to his use of markedness and grounding.

Research conducted for a separate project identified a significant counterargument from one of Porter’s frequently cited articles that he fails to engage or even acknowledge. Skepticism about his claims leveled by Silva and others suggested that a thorough comparison of Porter’s claims with the linguistic literature cited as support was called for. This comparison revealed his use of contrastive substitution to be nothing more than a straw-man argument against temporal reference in the Greek verb. In order to avoid anachronism, this critique weighs Porter’s claims only against his cited literature to demonstrate his failure to develop a linguistically sound methodological framework. Reference to more recent linguistic work is reserved for demonstrating that knowledge of these issues has not fundamentally changed to lend any new credence to his claims. Thus the numerous warnings from Porter’s primary literature against the veracity of his thesis that Greek verbs lack temporal reference are ignored rather than engaged.

Note: If you want to link from your blog or other media source back here, please do NOT just link to the article. Rather, link to the post so there is at least opportunity for folks to understand the history behind this paper.

Jan 20 / Steve Runge

Meaningful distinction between ἀλλά and εἰ μή, pt. 2

Making a clear distinction in meaning between function words like ἀλλά and εἰ μή can be difficult. They are function words providing instructions about how to relate two textual elements. Our tendency is to link them to the closest counterpart in English, but this means we are understanding them through a perhaps inaccurate filter. This is why I have found the idea of cognitive constraints to be so useful; it removes the need for assigning a gloss and focuses on what the word signals. In the last post, I summarized:

Both exceptive and restrictive constructions have something in common. Both constructions signal that what follows the ἀλλά and εἰ μή (or English rather and except) is to correct or replace some comparable element in the preceding context. It qualifies the preceding statements by adding some other piece of information. This piece of information either corrects some overstatement or, in the case of ἀλλά, replaces one option with another.

The last post focused on the function of ἀλλά to correct or replace. So what exactly is the difference between these two connectives if they both do this replacing thing? The key distinction between them is whether the correction/replacement was part of the preceding set or not. Here’s what I mean.

In the case of ἀλλά, the following statement introduces information that was not present in the original statement. The original statement had one or more elements that were incorrect or incomplete, and ἀλλά introduces a new element that was not under consideration. This new element can be added to the preceding set to make it complete or correct (i.e. correcting) or it may stand in the place of some incorrect element(s) (i.e. replacing). Here is what it looks like graphically.

Corrected diagram

Let’s take another look at an example from the last post.

1 Peter 3:13–16 (SBLGNT)

13 Καὶ τίς ὁ κακώσων ὑμᾶς ἐὰν τοῦ ἀγαθοῦ ζηλωταὶ γένησθε; 14 ἀλλʼ εἰ καὶ πάσχοιτε διὰ δικαιοσύνην, μακάριοι. τὸν δὲ φόβον αὐτῶν μὴ φοβηθῆτε μηδὲ ταραχθῆτε,

15 κύριον δὲ τὸν Χριστὸν ἁγιάσατε ἐν ταῖς καρδίαις ὑμῶν, ἕτοιμοι ἀεὶ πρὸς ἀπολογίαν παντὶ τῷ αἰτοῦντι ὑμᾶς λόγον περὶ τῆς ἐν ὑμῖν ἐλπίδος, 16 ἀλλὰ μετὰ πραΰτητος καὶ φόβου, συνείδησιν ἔχοντες ἀγαθήν, ἵνα ἐν ᾧ καταλαλεῖσθε καταισχυνθῶσιν οἱ ἐπηρεάζοντες ὑμῶν τὴν ἀγαθὴν ἐν Χριστῷ ἀναστροφήν.

Verse 13 asks who there is to harm you for being zealous, with the implied answer being “no one.” Verse 14 now introduces a new element, the idea that such a person really does exist. The ascensive καί casts this hypothetical person as a “least likely possibility,” comparable to our use of “even” in English. So in terms of the diagram above, the Xs stand for the people who’d harm you for being zealous. Are there any? No, well, maybe. The Y adds a caveat that even if such people do exist and this does happen, you shouldn’t let them affect your behavior.

The second one is a bit trickier, since the ground rules for how you make your defense are not explicitly mentioned in v. 15. Nevertheless, v. 16 introduces a new element that (most likely) was not under consideration by the hearer. If Peter were merely adding another thing to do (“Oh, and be sure to do it with gentleness and reverence”), whatever negative manner he was trying to prevent would still have been permitted. The use of ἀλλά constrains what follows to replace whatever negative behavior the preceding may have conjured up, defined as the opposite of gentle and reverent.

So although ἀλλά and εἰ μή both do the same sort of thing, and both can quite often be translated using a generic “but,” they nevertheless retain a meaningful distinction from one another. That distinction is present in Greek regardless of how it might be translated. I feel this point gets dismissed by some with a sense-based explanation: “It’s the A sense of the word in this context, not the B sense.” Whatever gloss is used, the same underlying distinction remains.
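If it helps to see the distinction stated mechanically, here is a toy sketch. It is my own illustration, not a formal semantics of the Greek connectives; the function names and the set-based framing are mine. The test is simply whether the corrective element comes from inside the preceding set or from outside it.

```python
# A toy model of the distinction described above (my own illustration,
# not a formal semantics of the Greek connectives).

def alla(preceding: set, new_element, incorrect=None) -> set:
    """ἀλλά: introduces an element NOT in the preceding set, either
    correcting the set (adding what was missing) or replacing an
    incorrect member with the new element."""
    assert new_element not in preceding, "ἀλλά introduces something new"
    result = set(preceding)
    if incorrect is not None:
        result.discard(incorrect)   # replacing
    result.add(new_element)         # correcting
    return result

def ei_me(preceding: set, exception) -> set:
    """εἰ μή: singles out a member ALREADY in the preceding set as the
    exception to the (over)statement."""
    assert exception in preceding, "εἰ μή restricts to an existing member"
    return {exception}

# 1 Pet 3:13-14: "no one will harm you... ἀλλά even if someone does..."
alla({"no one"}, "a hypothetical persecutor", incorrect="no one")
```

On this toy model, rendering both connectives with a generic “but” erases exactly the membership test that distinguishes them.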

This means that Paul’s use of εἰ μή in Gal 2:16 rather than ἀλλά was meaningful and intended. The lack of textual variants for this reading also points toward Paul intending to communicate something with εἰ μή that would not have been conveyed with ἀλλά. I’ll move on to εἰ μή in the next post.

Understanding grammar is a double-edged sword. It can improve the precision of our exegesis and understanding of the text. But it also makes it much harder to hide from what appear to be unpleasant things. If Paul had intended for “works of the law” to have been replaced by “through faith in Jesus Christ,” he surely could have done so using ἀλλά. To be blunt, I think a lot of reformed folks would have preferred that Paul had used ἀλλά, but he didn’t. An unpleasant thing this, but not the end of the world that some have made it out to be.

Instead Paul used εἰ μή in order to bring about some constraint that ἀλλά would not have achieved. Translating with the generic “but” in English–which can convey either constraint–only masks what is going on. I am not arguing here for the proper gloss, but for a recognition of what is going on in Greek.

Next post will spell out the constraint of εἰ μή and pave the way for heading back to Gal 2:15-16.