Majority Text The True Power of the Probability Argument

Discussion in 'Other Christian Denominations' started by Nazaroo, Apr 24, 2011.

  1. Nazaroo

    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    While textual critics as far back as the mid-1800s had a natural sense of the value of the quantity as well as the quality of witnesses to the text, the concept wasn't put on firm mathematical ground until it was faced squarely by Wilbur F. Pickering in his book, The Identity of the New Testament Text (Nelson, 1977/80), in Appendix C, "The Implications of Statistical Probability...", actually written by Zane C. and David M. Hodges. The book is freely available for viewing and download on Pickering's site.

    There, Hodges argued that probability was decidedly in favor of the Majority text (the readings found in the majority of surviving manuscripts).
    This was almost immediately challenged by D. A. Carson in his own appendix to The KJV Debate (Baker, 1979/80), a review of Pickering's book.

    Although the original diagrams and equations are complex for ordinary readers, the gist of the argument can be simply illustrated as follows:

    [​IMG]

    (1) If each manuscript is copied more than once, then there will always be more copies in each following generation. In the picture above, each row represents a copying generation, and the number of manuscripts doubles each generation, forming a simple, ever-expanding genealogical tree.

    [​IMG]

    (2) An error cannot be copied backward in time, so each error can only influence the copies which come after it, not those already written. Even the act of mixture cannot change this fundamental fact.

    (3) The manuscripts with a given error will be in the minority. The later the error, the smaller the minority. Even just 2 or 3 generations later, errors quickly become clear minority readings. (The diagram above assumes minimal reproduction, and a 3rd-generation error is stuck at about 25% attestation.)

    [​IMG]

    (4) Error accumulation is a self-limiting process, and later errors have little chance of influencing the text at all, even when preferred and adopted. For instance, by the 10th generation, it is impossible to introduce significant errors into the copying stream, even with minimal reproduction rates.
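The doubling arithmetic in points (1)-(4) can be put in a few lines of Python. This is an illustrative sketch, not from Pickering or Hodges; the uniform copy rate, the counting of every surviving generation, and the generation indexing are all assumptions of the sketch:

```python
def attestation(error_gen, last_gen, rate=2):
    """Fraction of all manuscripts (the original plus every generation)
    that carry an error introduced in a single copy at generation
    `error_gen`, assuming every manuscript is copied `rate` times."""
    total = sum(rate ** g for g in range(last_gen + 1))
    infected = sum(rate ** (g - error_gen)
                   for g in range(error_gen, last_gen + 1))
    return infected / total

# In a 4-generation doubling tree:
#   attestation(2, 4) -> 7/31  (about 23%)
#   attestation(3, 4) -> 3/31  (about 10%)
# Each generation of delay roughly halves the error's support.
```

The exact percentages depend on how generations are counted and on the copy rate, but the trend is the point: the later the error enters, the smaller its share of the surviving tree.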


    The Assumptions of the Model:

    The basic assumption of this model is that the copying process was reasonably "normal". Very little regulation is required for the model to be overwhelmingly accurate regarding the basic process of error accumulation. To be functional and predictive, the model makes only a few assumptions:
    a) Most manuscripts should be copied more than once. It is not even necessary that all manuscripts be copied. The process is very robust and allows for a wide variation in rates and numbers.

    b) The relative rates of copying should be roughly equal in each generation, for most branches. That is, one manuscript should not be copied many more times than the others. Again the process is robust, and difficult to skew or break.
    It is important to understand that this model of manuscript reproduction is just a scientific physical description, and completely neutral as to the causes of errors in the transmission stream. For this discussion, "error" does not signify any intent or lack of same on the part of the copyist or editor. It only signifies physical variance from the original text. The model is not influenced in the least by the motives of copyists or editors, and does not concern itself at all with how a variation in the text is introduced. It only makes universal assumptions about the mechanics of copying and errors in transmission.

    Critiques and objections to this model center around whether or not the transmission of the NT text really was "normal" in the sense described by the model. D. A. Carson's objection, for instance, is based on four points:
    (a) Historical factors skewed the results, allowing the dominance of a less accurate text (the Byzantine). He cites (1) the influence of Chrysostom, and (2) the restriction and displacement of the Greek language. Because of this, he argues, the Byzantine text probably doesn't represent the original text.

    (b) The 'generational' argument fails because errors were not introduced "generation by generation, but wholesale, in the first 2 centuries". Additionally, pressures to make the text uniform make the argument about most errors being minority readings null.

    (c) Catastrophes during transmission negate the predictions. Carson uses the "flood" analogy to say that transmission trees can be 'restarted' from bad copies and previous evidence lost. Presumably then, observations cannot be extrapolated back to pre-catastrophe conditions.

    (d) Early Christian copyists were inferior to Jewish scribes. Carson argues that therefore they were careless with the text. The majority of variants were early and accidental. ​
    Carson's objections, however, essentially fail.

    (1) The probability argument, as already stated, is independent of the motives or causes of corruption. It is only a statement about the physical process itself.
    This fact eliminates the basic objections found in (a).

    (2) Far from contradicting the fact that most errors with significant attestation are early, the model actually predicts this. Scrivener and Colwell may have found it (psychologically) 'paradoxical', but a mathematician wouldn't. The model is independent of objections (b) and (d). Even if more than one error at a time is introduced in each copy (very likely), this only means that each group of errors can be treated as a single corruption event or 'variation unit'. It makes no difference to the model or its general predictions.

    (3) If Carson is going to posit a 'catastrophic event' (like a major recension, with an accompanying slash-and-burn of all other contemporary copies), then he has to actually show historical evidence of such an event. Even a major recension cannot significantly alter the model, unless we add the destruction of most other unedited copies, the cooperation of all parts of Christendom (in the 4th century, the time when presumably this must be placed), and also the large-scale reproduction of the new substitute text. Neither Hort nor Carson has ever produced historical evidence that such a catastrophic event took place.

    (4) As to the 'carelessness' or lack of talent of early Christian scribes, this also has no effect on the model; it only affects the average rate of errors introduced to the text per generation. Carson has failed to grasp the essential features of the model of normal transmission, which is unaffected by varying rates of error.

    We will show the true problems, and limits of the probability model in a second post.

    (to be continued...)

    peace
    Nazaroo
     
  2. Nazaroo

    In the last post we examined the basic premise behind the idea that the majority of manuscripts would usually have the correct reading, and that any particular error introduced later on downstream would be a minority reading.

    [​IMG]

    This was known long before the time of Hort, and those proposing minority readings were conscious of having to counter the a priori weight of the majority of manuscripts.
    "Had we reason to believe that all these authorities were of equal value, our course would be a simple one....we should simply reckon up the number upon opposing sides, and pronounce our verdict according to the numerical majority. ...however, ... in a court of justice ... evidence given by different witnesses differs [greatly]. ... to merely [count] our witnesses will not do. We must distinguish their individual values. ...Were we to be guided by the number of witnesses [only] on either side, we would at once have to [find in] favour of the Received Text."
    - W. Milligan ( The Words of the NT, 1873) ​
    While insisting on the need for weighing witnesses, Milligan here openly concedes what everyone knows: Most readings in the Textus Receptus (TR) are supported by an overwhelming majority of manuscripts.


    Milligan's own proposals avoid any direct attempt to debate the value of landslide majority readings. Instead, he uses a crude procedure of dividing MSS into 'early' and 'late': His fundamental axiom is that older manuscripts and their readings are better. From this universal assumption, all 'early' MSS and their readings are simply given a priori preference over their numerically vastly superior, but mostly later rivals. Assigning priority by fiat, he avoids having to deal with probability arguments regarding MS counts.


    This arbitrary method however does nothing to actually refute the reasonable presumption that, all other things being equal, the majority reading is most probably original.


    Hort himself knew the fallacy of Milligan's simplistic solution, for he insists, ​
    "But the occasional preservation of comparatively ancient texts in comparatively modern MSS forbids confident reliance on priority of [MS] date unsustained by other marks of excellence." (Intro. p. 31)​
    Hort further conceded that for singular readings, the majority reading certainly did hold the probability of being correct: ​
    "Where a minority consists of one document or hardly more, there is a valid presumption against the reading thus attested, because any one scribe is liable to err, whereas the fortuitous concurrence of a plurality of scribes in the same error is in most cases improbable;" (Ibid. p. 44)​
    Hort was certainly aware of the problem and power of a majority reading, and rather than dismiss it completely, he sought to severely limit its value. He spends many pages presenting hypothetical arguments in an attempt to minimize and/or eliminate the validity of majority readings (e.g. Intro., pp. 40-46). In order to override the weight of majority testimony, Hort in the main invokes the concept of genealogy. His methods and arguments have been critiqued elsewhere, so we won't go into them here.


    But the argument based on the majority of MSS actually is itself essentially a genealogical argument, something for the most part ignored in the literature.
    Here we will be free to explore both its strengths and weaknesses.


    peace
    Nazaroo

     
  3. Nazaroo


    ...Finishing off Hort

    Before moving into a proper discussion of the Majority Reading Probability Model, we would like to finish off our discussion of some of Hort's assertions in the previous post.
    Hort insisted that 'majority readings' were only valid when it came to singular readings (with only 1 or 2 witnesses in support), because only these could in his view be almost certainly identified as errors by the actual scribe of the surviving manuscript. But the line isn't anywhere near so clear and easy as this.


    (1) Many accidental omissions avoid detection because the text still makes sense, and the lost content isn't critical to the text. Dittography errors (accidental repetitions) by contrast are easy to spot, and quickly and easily repaired. As a result, omissions were copied repeatedly, since the most common error-correction was done against the very same master-copy with the errors.


    (2) Many accidental errors were copied because of lax error-correction, especially in early times, before standardized practices were developed. This helps explain why so many errors are very early.


    (3) Many errors would be mistaken for the correct text, and would invade other copying streams through cross-correction and mixture. As a result, often diverse copies can attest to rare minority readings.


    (4) Some omissions of key material would make that material appear to be a deliberate addition for doctrinal purposes, and cause correctors to prefer the omission.


    (5) Some areas of the text were prone to accidental error from stretches of similar words, giving independent copyists many opportunities to make the exact same errors:

    [​IMG]

    (6) Many minority readings would have originated as singular readings in previous copies, and there is no reason to treat scribes whose work is now known only through copies differently from scribes we can access directly. A large number of minority readings will have the same features and probable causes as singular readings, and refusing to apply our knowledge of scribal habits to non-singular readings is not sensible.


    Accepting only singular readings is a good skeptical methodology when assessing both an individual scribe and gathering data on general scribal tendencies. But once knowledge of scribal tendencies can be generalized, it needs to be applied to all parts of the copying stream, including ancestors and lost exemplars behind surviving documents.


    Because of all these well-known factors, extreme minority readings cannot be ignored simply because they are not 'singular'. Variety and quantity of independent attestation to a reading still counts as an important factor in evaluating variants.




    Factors that Further Enhance the Probability Argument
    for Majority Readings

    Before we critique the Probability argument, it is important to look at other well-known and understood factors that uniformly increase the reliability of the majority reading.

    In the original model, we showed minimal manuscript reproduction: each manuscript was copied only twice. In Hodges' original illustrations, a reproduction rate of 3 copies per master was actually used: "each MS was copied three times, as in other generations..." (App. C, p. 162, footnote, online version).

    Both of these rates however are extremely low and unrealistic. In practice, it is almost certain that master-copies would usually be copied far more than just 2 or 3 times. A good master-copy might be used dozens, or even scores of times over many years, until worn out or destroyed:

    [​IMG]

    The result of actual practice will be a much bushier tree than the commonly seen binary branches of simplified models.

    Nonetheless, sparse trees with low reproduction rates can represent a "worst case" scenario to test the robustness of the model. Consider the following model tree, with a few enhancements (4 copy generations, 30 copies):

    [​IMG]

    Here we've chosen a starting rate of about 3 copies per master (first 2 generations), followed by a slow-down to slightly less than 3 per master (3rd generation), and finally 2 copies per master (4th generation). We have also allowed that some copies will be dead-ends, and not copied at all. This is a much more realistic picture of the probable beginnings of a copying run.

    Error Packets:

    Multiple errors are added to a large book each time it is copied; once obvious errors are caught and corrected, however, we can treat the rest as a single "Error Packet", transported in bulk from copy to copy. This packet will infect all future copies down the line. Above, the Yellow Packet (a 2nd-generation error) has passed to 8 copies (8/31 = 26%). The Red Packet (3rd generation) has spread to only 3 copies (3/31 = 10%). These low percentages show good reliability in the percentage indicators, provided the basic conditions have held (moderately close copying rates in each generation). A 4th-generation error would drop to a 3% minority reading.

    Varying Copy-Rates:

    Even significantly retarding the copying rate in following generations has not affected the basic result. The early 'dead-end' copies in fact could be connected almost anywhere. A strict rate of 2 copies per master would have put them under the middle two (uncopied) 3rd generation copies. The white uninfected copies could be arranged in almost any independent manner, with the same result. This shows the robustness of the model even with varying copy-rates. A steadier copy-rate would have actually lowered the Yellow Packet score further to about 22% (a 4% loss in votes).

    In fact it is difficult to force the MS support of an Error Packet high enough to mislead. Most random fluctuations in the copying stream do not enhance MS counts for Error Packets, but lower them further. Since there are almost infinite combinations of such 'negative' events possible, and relatively few 'beneficial' variations that would cause a significantly high 'false positive', the odds are greatly against an Error Packet achieving a majority vote in a random, undirected process.
    Most often, even significant and very 'lucky' anomalies in the copying process will not affect the count enough to turn an Error Packet into a 'Majority Reading'. Thus not only will all negative and 'neutral' variations leave Error Packets with low scores, so will positive variations that don't score high enough. Most equally probable random variations then will leave Error Packets as minority readings.
    This is important, for it means that only a directed process, (e.g., a deliberately manipulated copying stream) could result in Error Packets becoming Majority readings.
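How hard is it for an Error Packet to win a majority by luck alone? The following toy Monte Carlo sketches the question in Python. Nothing here is from the source: the random 1-3 copies-per-master rate, the generation count, and the function name are all illustrative assumptions.

```python
import random

def false_majority_rate(generations=5, error_gen=2, trials=2000, seed=7):
    """Toy Monte Carlo: each manuscript is copied a random 1-3 times
    per generation, and an error enters exactly one copy at generation
    `error_gen`.  Returns the fraction of trials in which that error
    ends up in a majority of ALL surviving manuscripts."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        gen = [False]                  # generation 0: the original
        total, infected = 1, 0
        for g in range(1, generations + 1):
            nxt = []
            for has_error in gen:      # each MS gets 1-3 children
                nxt.extend([has_error] * rng.randint(1, 3))
            if g == error_gen:         # one copy picks up the error
                nxt[rng.randrange(len(nxt))] = True
            gen = nxt
            total += len(gen)
            infected += sum(gen)
        if infected > total / 2:
            wins += 1
    return wins / trials
```

In this deliberately small and noisy tree an early error can still get lucky some of the time, but pushing `error_gen` later drives the false-majority rate down sharply, in line with the argument above.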

    (to be continued...)
     
  4. Nazaroo

    Nazaroo
    Expand Collapse
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    Going Deeper into the
    Probability Argument

    Our simple copying tree can illustrate many other interesting questions about the probability argument. One question which has so far been bypassed in vague protests and discussions is exactly what kind of catastrophe could produce a false majority text, and what combination of features it would have to have.

    For instance, an obvious objection would be that the Majority model presumes that all manuscripts are actually available to be counted. In fact it does not require this at all. However the question of adequate sampling of the copying stream is a legitimate issue, and poor sampling would naturally be expected to skew results and their confidence factor as well.
    [​IMG]
    Taking our copying tree, it is reasonable to assume the earliest copies would gradually be lost, not just for counting, but also for collating.
    [​IMG] (First two and a half generations lost.)

    One immediate observation is that the loss of the earliest manuscripts will indeed benefit the % score of an Error Packet. Here the Yellow Packet now holds 8/25, or 32%, up from 26%, a gain of 6 points. The Red Packet, however, goes from 3/31 (10%) to 3/25 (12%), gaining only about 2 points. Not only is the payoff low, but such a moderate loss only significantly benefits the earliest minority readings, those with the highest initial percentage.

    How big a catastrophe is needed to flip a minority reading into a majority?

    [​IMG] (Three and a half generations lost.)

    The score is now 6/18 = 33% for Yellow (only +1 point!), and 3/18 = 17% for Red (+7 points). A minor surprise. What is happening is that the earlier Error Packet is now losing votes along with the original readings, while the Red Packet is still gaining voting power, because none of its voters has been affected by the catastrophe. Yet it doesn't take much to see that no minority reading can get much further ahead simply through the loss or destruction of earlier manuscripts.
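The arithmetic of this worked example can be checked directly. This sketch simply recomputes the percentages from the manuscript counts read off the diagrams above; the counts themselves are the only inputs:

```python
# Attestation of the two Error Packets as earlier generations are lost.
# (total MSS, Yellow count, Red count) taken from the diagrams above.
cases = [
    ("full tree",                  31, 8, 3),
    ("first 2.5 generations lost", 25, 8, 3),
    ("first 3.5 generations lost", 18, 6, 3),
]
for name, total, yellow, red in cases:
    print(f"{name}: Yellow {yellow}/{total} = {yellow/total:.0%}, "
          f"Red {red}/{total} = {red/total:.0%}")
```

The gains shrink as the catastrophe grows: the earlier packet's share barely moves once its own supporters start being destroyed along with the good copies.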


    Textual Disasters:

    What we need is a REAL catastrophe. The good manuscripts need to be specifically targeted, and with ruthless efficiency. With those eliminated, at least some errors end up in a majority of surviving MSS. The Yellow Packet readings are now in 6 out of 9 MSS (66%), a comfortable majority.
    [​IMG]
    But the Red Packet remains a reading-block with only minority support. What happened? Even though every good manuscript has been eliminated, the good readings in each of the remaining groups ensure that most readings, namely the later ones, are stuck with only minority support. Remember that these Error Packets are not directly competing, but are independent groups of errors in different areas of the text. Any overlap will be very small, and the chances of the scribes making the exact same errors are smaller still.
    It's clear that even the loss of the best early MSS alone cannot cause the dominance of any but a few of the very earliest errors. This means, generally, that no amount of destruction of earlier manuscripts by itself could cause a minority text to become a majority text. That is, the errors in the surviving manuscripts will include early, middle, and late errors. All types of errors will be confined to this group, but not all can make it into a majority of surviving manuscripts. Some must remain minority readings, even though they uniquely characterize the text-type and may be exclusive to it.

    We need a very special kind of disaster, to pull off the kind of coup which is claimed for the Textus Receptus (TR, = Byzantine text-type). Remember that almost ALL the readings unique to this text-type are rejected by critics, and ALL are claimed to be 'late' (not existing before the text-type). Only a very small number of important readings are admitted to be ancient by critics, and these are said not to be unique or characteristic of the TR (or the Byzantine text).

    But this claim flies in the face of the mechanics of transmission. If this small group of readings really were ancient, they would be majority readings and characteristic of the Byzantine (Majority) text-type, not mere peripheral readings. And if the bulk of the Byzantine readings really were late, they would mainly be minority readings within the text-type, and would not saturate every Byzantine manuscript.

    (to be continued...)

    Nazaroo
     
  5. Nazaroo

    Some people may think that the argument in favor of the Majority text is simply this: That errors, being introduced later in the stream, will almost always be stuck in the minority of manuscripts.

    This however, is not the actual argument at all. The possibility that a manuscript with a given error (or set) could be copied more often than manuscripts without the error(s), is actually a given. As Hodges notes:
    "...of course an error might easily be copied more often than the original in any particular instance." (Pickering, Identity..., Appendix C p 165). ​
    But the point is, this only works once. Errors can't accumulate gradually in such a manner. Let's see why. We start with the Yellow Packet copies being copied more often, and this gives us an initial false majority for the errors introduced by the Yellow master-copy:
    [​IMG]

    Errors do indeed accumulate. In the above diagram, all manuscripts copied from the first copy with the Yellow Packet will have its errors. Further down, an Orange, Red, and Purple Error Packet are added. But the effect is obvious:
    White (Pure) - 10 / 25 = 40% minority reading (unfortunate)
    Yellow Packet - 15 / 25 = 60% majority - false ('lucky break')
    Orange Packet - 8 / 25 = 32% minority
    Red Packet - 3 / 25 = 12% minority
    Purple Packet - 1 / 25 = 4% minority
    It doesn't take a genius to see that again, the natural tendency pins down most subsequent errors as minority readings. This doesn't bode well for the Purple text. The very manuscripts that support the Yellow Packet readings testify strongly against the Purple Packet readings.


    (Secondly, although the 'White text' (pure text) as a unit is in the minority of MSS, its readings remain in 99% of cases perfectly safe, still vouchsafed by majority readings. )
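As a quick check on the tally above, here is a short sketch; the counts are those from the diagram, and the only assumption added is the usual 50% threshold for a "majority":

```python
# Manuscript counts for each Error Packet out of 25 surviving copies,
# after the 'lucky break' over-copying of the Yellow line.
packets = {"White (pure)": 10, "Yellow": 15, "Orange": 8,
           "Red": 3, "Purple": 1}
for name, count in packets.items():
    status = "majority" if count > 25 / 2 else "minority"
    print(f"{name}: {count}/25 = {count/25:.0%} ({status})")
# Yellow is the lone false majority; every later packet stays a minority.
```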

    Even wiping out all earlier generations doesn't help. This only stabilizes the percentages for each group of readings, once normal or average copying is resumed:
    [​IMG]

    White Packet - 2 / 8 = 25% - minority (in present / future)
    Yellow Packet - 6 / 8 = 75% - majority - 1 false reading set

    Orange Packet - 4 / 8 = 50% - neutral / uncertain

    Red Packet - 2 / 8 = 25% - minority / true reading
    Purple Packet - 1 / 8 = 12% - minority / true reading
    Although this extreme case seems to undermine the reliability of majority readings, this simply isn't so. Probabilities remain strongly in favor of majority readings. Let's see why. We need to remember that only a very small number of early and frequently copied readings will have a false majority rating (e.g. the Yellow Packet).
    The majority of errors in the extreme texts (e.g. the Purple Text) will have their reading-support all over the map, and with very few false-positives (e.g. Yellow); but the bulk of errors will remain graduated minority readings.
    Error Packets (and real errors) will still be identifiable because:
    (1) These minority readings will however, still be strongly associated with the most corrupted and generationally later texts (e.g. Orange, Red, Purple). ​
    (2) These texts will be easily identified, because (a) as texts or composite groups of error-packets they will remain minority texts. (b) The differently supported packets allow us to use genealogical methods.​
    Typically, opponents of the Majority-of-MSS model will reason that a process of uneven copying could occur repeatedly, boosting minority readings into majority readings on a larger scale. Hodges showed the failure of this argument by demonstrating that, cumulatively speaking, the probability of multiple accidents favoring a bad text quickly plummets. In discussing the case of a second error (or error packet) in a following generation, Hodges states:
    "Now we have conceded that 'error A' is found in 4 copies while the true reading is only in 2. But when 'error B' is introduced [in the next generation], its rival is found in 5 copies. Will anyone suppose that 'error B' will have the same good fortune as 'error A', and be copied more than all the other branches combined?...but conceding this far less probable situation, ...will anyone believe the same for 'error C'? ...the probability is drastically reduced. " (Append. p. 167)​
    These 'errors' would be equivalent to our Yellow, Orange, Red Packets respectively.

    Compounding Unlikely Events: Rapidly Decreasing Probability

    We allowed for one catastrophe: the over-copying of the Yellow Packet. Hodges' argument here is actually so powerful, it's clinching:


    Probabilities are calculated by multiplication, with the probability of each event represented by a fraction less than 1. A 50% chance of an error being over-copied means it could happen half the time. But the chance of a second error also being over-copied is (1/2) × (1/2) = 1/4, only 25%. The chance of three such equally probable events in a row is 12.5%. This is the same as flipping a coin: for a fair and random coin-toss, the chance of tossing 7 'heads' in a row is less than 1%!

    Likewise, even with 50/50 odds, seven generations of errors have almost no chance of ever being consistently copied more often than their rival readings in a random undirected process.
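The compounding itself is one line of Python per generation; this is just the multiplication rule for independent events, with nothing specific to manuscripts assumed:

```python
# Chance that n successive, independent 50/50 'lucky breaks' all occur.
for n in range(1, 8):
    print(f"{n} in a row: {0.5 ** n:.2%}")
# 0.5 ** 7 is about 0.78% -- under 1%, like seven heads in a row.
```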

    But our observations here go far beyond even this argument: it's a case of the experiment being poisoned before it can even get off the ground.


    The Defocussing Effect of Noise on Transmission

    What is not being mentioned so far in any of the discussions is the fact that ALL scribes introduce errors, in every single copy. Contrary to intuition, this actually also assists the Majority Reading Model, by sabotaging false positives further.

    The scheme above isolates four Error Packets for discussion, and the analysis is valid because they are 'independent' in the sense that normally errors won't overlap or interfere with each other in early transmission. It's like a needle in a haystack: the chances of two errors bumping into each other are nearly zero.

    But with errors being added randomly and on average roughly equally with each copy, we have now introduced random noise into the signal at all points. This random noise acts to mask the false signals as effectively as the true signals.

    One can think of injected random noise as a 'damping factor': A bell rings clearly and long in the air. But a mute, or mask attenuates both the loudness of the bell and the duration of the note. Imbalances (spikes and dips) in the transmission process are softened, evened out and muted in a variety of ways, randomly. This impedes the efficiency of transmission; the clarity, and the duration of false signals (errors) as well as true ones are attenuated.

    However, the true signals have an enormous starting-advantage: They are already 100% Majority readings, and it takes a lot of accumulated noise in the transmission to disturb the original text enough to actually supplant a majority reading. These are modern considerations now well analyzed by engineers, but which were unknown to 19th century textual critics relying on 'common sense' or intuitive guesses.
    Although both true and false signals are attenuated and masked by noise, the much smaller error signals suffer the most relative damage from further random noise. Anomalies in the error transmission are smoothed, masked, and truncated by random processes, which defocus unique and unusual signals in the mix.

    peace
    Nazaroo
     
  6. Nazaroo

    In the last post, we looked at the exploding unlikelihood of a sequence of individually unlikely events. Specifically, in a copying series, we considered Error-Packets added generation by generation.

    We discovered that even though the Error-Packets were 'independent' in some sense, the best-case scenario would be like a series of independent coin-tosses. The chance of an unlucky circumstance falsely favoring a minority reading became progressively smaller with each repetition.

    Mutually Exclusive Events are Impossible, not Improbable!

    But now we are going to look more closely at the situation, and discover something far more fatal to the theory of a build-up of minority readings:
    Accumulated groups of readings cannot occupy majority positions.

    Consider the following diagram, much more realistic, but also potentially dangerously favoring minority readings:

    [​IMG]
    The first Error-Packet, A, introduced here in first-generation copy #10, is multiplied because that copy is chosen as a master-copy. For our purposes, we may allow that most other first-generation copies (#0 - 255) are simply destroyed by the Romans. Now the Error-Packet is found in an undisputed majority (80% or more) of manuscripts.

    But now by definition and premise, copy # 10 must also be multiplied greatly, and its copies must stay in the copy-stream and be copied themselves, perpetually and in high numbers. This is exactly what will allow Error Packet A to continue holding its majority-reading position. If those too are destroyed, they were copied for nothing, and Error Packet A effectively drops off the face of the earth, while copies without it carry on.

    But now consider Error-Packet B, in second-generation copy #1: we want it also to become a majority reading. But this is impossible without destroying most other copies made from copy #10. That is, if we again use the same trick, multiplying copies of manuscript #1 to beef up its readings down the line and destroying the competing lines from copy #10, we have actually contradicted ourselves, because the whole purpose of multiplying copies of copy #10 was to provide a high manuscript count, by keeping them in the copying stream and having them continually multiply in excess of all others.
    In order to boost Error-Packet B, we have to abandon boosting other copies of Error-Packet A. Since we want to boost both Error-Packets, we can only boost copies of second-generation copy #1, which contains both.
    But this means all the extra copies of earlier generations in this line must be suppressed: either not copied, or else destroyed. The net effect of this strategy will indeed guarantee that each error is a majority reading, and that all copies support all Error-Packets equally. But now the fans of copies from each previous generation are erased, and we are only allowed one copy per generation!
    [diagram]

    Errors Accumulate in a Sequential Series, not a Branching Stream

    In order to keep each new error in a majority position, we have to prevent all fanning of generations. Only the key stream can be perpetuated, and only the final copy can be multiplied. Early branching is simply not allowed in significant numbers.

    Even here, however, most errors can be identified and removed by the manuscript count alone, without comparing manuscripts to independent lines! Early errors will be majority readings, but most errors, and especially the later ones, will be minority readings.

    It is trivially true that any copy down the line will have accumulated errors from multiple generations. And it is also trivially true that only copies along this line will have all the errors we are accumulating. But it is also true that even now, even with a completely pruned genealogical tree, we still can't get evenly distributed errors as majority readings. The later errors will simply not be present in the earlier copies. The only genealogical tree which allows the majority of errors to become majority readings is as follows:

    [diagram]


    This scenario is the only 'catastrophe' that can possibly generate a large number of errors as false majority readings, and only those errors in the copying line can become majority readings. Two simultaneous events must occur:

    (1) Most previous copies must be destroyed, to remove good readings.

    (2) Copies must be mass-produced only at the final stage of transmission.

    This is what the modern critical model is really proposing.


    peace
    Nazaroo
     
  7. Winman

    Winman
    Banned

    Joined:
    Jul 8, 2009
    Messages:
    14,768
    Likes Received:
    0
    Great article, I have posted this in Bible Versions/Translations.

    Thanks!
     
  8. Nazaroo

    Nazaroo
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    And thank you for the thank-you.
    It's so rare these days I was surprised!
    Thank you for posting the article in BV/Trans,
    as I'm not really supposed to post there.
    I'll read what anyone wants to add though:

    peace
    Nazaroo
     
  9. Gerhard Ebersoehn

    Gerhard Ebersoehn
    Active Member

    Joined:
    Jul 31, 2004
    Messages:
    8,870
    Likes Received:
    3
    GE:

    But since ‘modern man’ has begun to produce ‘translations’ as, or for, ‘the Text’, and with ever-improving technology has already reached the point where he controls, and is able to identify and expel or increase and perpetuate, ‘error’, we no longer have to do with an increase of ‘errors’ in actual manuscripts. It stopped where manuscripts stopped, and error became systematic conspiracy as religious ideology gained control of technology. That is where we, the ochlocracy, are today as far as our ‘Received Text’ is concerned— completely at the mercy of the Priesthood of Institutional capitalistic autocracy— The Sacred Mass Media.

    Yes, let me edit it to say what I wanted to say in the first place: completely at the mercy of the Priesthood of Institutional capitalistic autocracy— The Sacred Mass Media MAFIA.

     
    #9 Gerhard Ebersoehn, Apr 24, 2011
    Last edited by a moderator: Apr 24, 2011
  10. Nazaroo

    Nazaroo
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    The 'Catastrophe' Model

    Now that we have the only viable genealogical stemma for a "catastrophe", it still isn't enough: All the stemma does is provide an opportunity for a disaster to take place.


    [diagram]
    It's not a disaster by itself, for we have presumed ordinary copying at every step. Some copyists will make more mistakes than others, and some copies will be better proof-read than others. But these variations do not create any kind of catastrophe. The text will accumulate normal errors generation by generation, and those errors will remain mostly minority readings in the early copies.

    In fact, if the copyists have done an honest job, the final copy will be as good or better than any other late copy we might have chosen as a master-copy for future copies.

    It won't be until the final 'flowering' and rapid expansion of the last copy that we'll have the particular errors of this copy-line become a permanent feature of our 'majority text'.

    But that's no real problem at all. When we examine the text recreated from this final exemplar, it will only be slightly different in flavor from a text based on some other copy, or even a large group of other copies. The overall error count will not be significantly higher from a practical point of view.


    Planning a Disaster

    For our disaster we still need one more thing: a massive alteration of the text, all at one sitting as it were, so that the new false readings become majority readings. The most likely scenario has this happening all at one time, since it is a rare and unusual event.

    Since errors can only be injected in packets, the opportunity for a real disaster only occurs once per generation in this 'catastrophe' model. In the example above, there are only three chances: Copy #10 (1st gen), copy #1 (2nd gen), and copy #253 (3rd gen). Once mass-copying begins, the opportunity is gone.

    This is exactly what the modern critics who follow Hort and the text of Aleph/B propose. Hort claimed:
    "An authoritative Revision at Antioch ... was itself subjected to a second authoritative Revision carrying out more completely the purpose of the first. At what date between 250 and 350 the first process took place is impossible to say ... the final process was apparently completed by 350 A.D. or thereabouts." (Introduction, p. 137)
    Hort tentatively suggested Lucian (c. 300 A.D.) as the leader, and some scholars subsequently became dogmatic about it.
    Thiessen claimed:
    "...the Peshitta [Syriac] is now identified as the Byzantine text, which almost certainly goes back to the revision made by Lucian of Antioch about 300 A.D." (H. C. Thiessen, Introduction to the New Testament (Eerdmans, 1955), pp. 54-55.)
    All that we really know of Lucian was provided by Eusebius (c. 310 A.D.), and Jerome (c. 380 A.D.). But the picture painted by this evidence is quite different from that proposed by modern critics. Eusebius praises Lucian as virtuous, and Jerome later calls him talented; but James Snapp Jr. explains:
    "...in his Preface to the Gospels, Jerome had described the manuscripts which are associated with the names of Lucian and Hesychius without any sign of admiration. Specifically, Jerome had written:​
    "It is obvious that these writers [Lucian and Hesychius] could not emend anything in the OT after the labors of the Seventy [i.e., they could not improve upon the LXX]; and it was useless to correct the NT, for versions of Scripture already exist in the languages of many nations which show that their additions are false." ​

    Jerome suggests at least three popular 'revisions', each however being a regional text favored at its own major city-center. Again James notes:
    "Notice the setting of Jerome's comments. Jerome was, in 383, making a case for the superiority of the text-base which he had used as the basis of his revision of the Gospels. He had, he explained, supplemented [and corrected] the wildly-varying Latin copies by appealing to ancient Greek MSS, and he noted that he did not rely on MSS associated with Lucian and Hesychius. This implies that there were, at the time Jerome wrote, copies of the Gospels which were associated with Lucian's name."
    Jerome does not here deny consulting Origen's copies of the NT. But it is known that he went to Constantinople to use the oldest and best Greek copies there. He has specifically stated that he avoided using Lucian or Hesychius, preferring older copies.
    In Jerome's Introduction to Chronicles, he mentioned three popular forms of the Greek OT text: ​
    "Alexandria and Egypt in their LXX [copies] praise Hesychius as author; Constantinople to Antioch approves the copies of Lucian the martyr; the middle provinces between them read the Palestinian books edited by Origen, which Eusebius and Pamphilus published."
    Also, addressing variants in Psalms, Jerome stated in his Epistle to Sunnias and Fretela (c. 403), ​
    "You must know that there is one edition which Origen and Eusebius and all the Greek commentators call koine, that is common and widespread, and is by most people now called Lucianic; and there is another text, that of the LXX, which is found in the MSS of [Origen's] Hexapla, and has been faithfully translated by us into Latin."​

    Here Jerome clearly indicates that for the OT, he has avoided the Koine/Lucianic text, and used the text-critical work of Origen instead.

    The historical data tell us several things:

    (1) There was no wholesale destruction of MSS or competing texts. At the time of Jerome's Vulgate (400), at least three major text-types were readily available, each being used and copied over wide regions and distributed from independent centers. Additionally, Jerome was able to travel to centers like Constantinople to access even older copies, predating the 'recensions' known to him around 400 A.D. Those manuscripts would have been older than Origen's copies (c. 250), Lucian's (c. 300), or Hesychius' (c. 250-300).

    (2) Official Recensions were not readily accepted, and did not displace current texts. Jerome's Latin Vulgate (a new Latin translation), meant to replace the Old Latin copies, which were too varied, was strongly opposed, especially in his attempt to conform the OT to the Hebrew text. It was finally adopted only after many of the readings Jerome had introduced had been restored to the traditional text!

    (3) The Latin Vulgate conforms strongly to the Byzantine text-type, sharing most readings. This tells us that the ancient manuscripts Jerome consulted must have had the Byzantine text. Jerome thought this was older than both the Lucianic and Hesychian recensions, and avoided those. This can only imply that the Lucianic Recension cannot be the Byzantine Text-type, or its source. Jerome was able to distinguish it quite plainly from the Byzantine, which he adopted.

    (4) The NT Takeover by the Lucianic Recension simply did not take place. The association of the Lucianic text with the 'Koine' is in reference to the Old Testament versus Origen's version of the LXX.

    (5) The conditions for a 'Catastrophe' of the type proposed by Textual Critics did not exist, and no such drastic alteration to the text could have happened. The Byzantine text is probably the result rather of a 'normal' transmission process.

    (to be continued...)

    Nazaroo
     
  11. billwald

    billwald
    Banned

    Joined:
    Jun 28, 2000
    Messages:
    11,414
    Likes Received:
    0
    You invent this stuff on your own or copy it out of a book?
     
  12. Nazaroo

    Nazaroo
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    I quote authors and write my own articles.
    I often make my own diagrams, but many
    have been made by Mr. Scrivener.
     
  13. Nazaroo

    Nazaroo
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    Majority Text (VIII): Cross-Pollination - Correction and Mixture


    It has been claimed in the past that the problem of "mixture" (the correction of manuscripts and the copying of readings across genealogical lines) negates or destroys any genealogical arguments and claims.

    This is simply not true, and shows a poor understanding of the real effects of such activity. Consider first of all the simple act of double-checking: proof-reading a copy against its own master-copy. This will very rarely introduce further errors, and will most often simply correct copying mistakes from the 'first pass'. The effect of error-checking and correction is quite predictable: the rate of accumulation of errors is drastically reduced.

    Error-correction has the main effect of severely retarding any corruption over copying generations, and greatly extending the staying-power of the original readings; error-checking always increases the percentage score of any and every majority reading (i.e., correct reading).
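    The retarding effect is easy to see in a toy Monte-Carlo sketch (all parameters here are my own assumptions, purely illustrative):

```python
import random

random.seed(0)  # reproducible illustration

NEW_ERRORS = 10    # assumed fresh mistakes per copying generation
CATCH_RATE = 0.8   # assumed fraction caught by proof-reading vs. the exemplar
GENERATIONS = 20

def errors_after(generations, catch_rate):
    """Total errors accumulated along one copy-line."""
    total = 0
    for _ in range(generations):
        # each new mistake is caught (and fixed) with probability catch_rate
        caught = sum(random.random() < catch_rate for _ in range(NEW_ERRORS))
        total += NEW_ERRORS - caught
    return total

print("no proof-reading  :", errors_after(GENERATIONS, 0.0))   # -> 200
print("with proof-reading:", errors_after(GENERATIONS, CATCH_RATE))
```

    With no proof-reading the line accumulates every mistake (200 after 20 generations); with an 80% catch rate it ends up carrying only around a fifth of that.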

    What happens, however with true "mixture", where readings cross into parallel transmission lines? The answer is similar, but has an added complexity:

    [diagram: Transmission Model with Mixture]

    "Mixture" occurs when a manuscript is corrected from some other copy not involved in or descended from its own transmission branch. This happens just as in ordinary correction, but now, readings not found in the master-copy can enter into the manuscript and continue in the copying stream.

    In the above diagram, blue lines indicate "successful" corrections, that is, cases where the corrector was himself correct in making the change. Red lines show places where an incorrect reading was copied into a manuscript that originally had the correct one. We have allowed that good corrections will occur slightly more often than bad ones, which is a reasonable expectation.
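    That small bias has a one-line consequence worth making explicit (a sketch under the stated assumption; the probability p = 0.55 is my own illustrative figure):

```python
# Expected-value sketch (p is an assumed, illustrative figure): when a
# correction event pits a correct reading against an error, the error is
# removed with probability p and copied in with probability 1 - p. The
# expected net change in error count per event is (1 - p) - p = 1 - 2p,
# which is negative whenever p > 0.5: errors drift downward on average.
p = 0.55
drift = (1 - p) - p
print(round(drift, 2))  # -> -0.1
```

    Any bias toward good corrections, however slight, makes mixture purify the stream on average rather than corrupt it.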

    On the top-right, a copy containing the Green Packet gets corrected from a very old copy, and has its correct readings restored (it now becomes white). This is one of the most likely scenarios, since early copyists would naturally assume older manuscripts are more accurate (just as modern critics do!). As a result, the Green Packet loses many votes that would have accumulated from this copy, and its errors become even smaller minority readings. Now we allow a Yellow Packet copy to be 'corrected' from a faulty Green Packet copy, so that it carries both Yellow and Green errors. This will not compensate for the loss of the earlier Green Packet copy, because it comes later. It fathers instead a peculiar minority 'text-type', a family with mixed readings.

    Now on the left side, an early mistake is made: a Yellow Packet copy is 'corrected' by an Orange Packet copy, resulting in a boost of Orange Packet readings. The Yellow Packet readings are unaffected. Even if this corrupted copy is recopied twice more (not shown), the Orange Packet manuscripts will only amount to 10 copies out of 26 (about 38%, up five points from 33%), remaining minority readings.

    Correcting a Red Packet copy using an Orange Packet copy does nothing for Orange readings, however! In this case the Red Packet readings decrease, but the Orange readings were already in this copy, so there are no gains; it is only the Red readings that get corrected. Since this is more likely than not (similar copies will be in similar geographic regions), minority readings will lose out more than half the time. In this case "mixture" has only purified the transmission stream, and this is actually the most common scenario, even when correcting from diverse copies.

    Again, when an Orange Packet copy is corrected from a Yellow Packet copy, the only net result is purification of the copying stream. The errors in the Yellow Packet are already present in the Orange copy, so no correction is made there; only Orange readings are removed. It is perfectly reasonable and effective to correct a copy from another copy that itself has errors. The average result will not be any increase in errors, but usually only an exchange, with as many Error-Packets getting corrected as getting perpetuated.

    The error-count within an Error-Packet is not relevant here (i.e., the 'size' of the Error Packet). Of course Error Packets can be of different sizes and degrees of seriousness. But they can only be transmitted manuscript to manuscript in groups, and each act of copying a manuscript must be treated as a single discrete event. We cannot switch back and forth between Error-Packets and errors within a packet indiscriminately, as this would violate proper analysis of the error transmission process.

    Again as in the non-Mixture model, varying copying rates only moderately affect minority readings, mostly in a random fashion and not with the consistency needed to cause minority readings to become majority readings.

    (to be continued...)

    Nazaroo
     
  14. billwald

    billwald
    Banned

    Joined:
    Jun 28, 2000
    Messages:
    11,414
    Likes Received:
    0
    Your argument should also apply to all the texts, the oldest being the most accurate.
     
  15. DHK

    DHK
    Moderator

    Joined:
    Jul 13, 2000
    Messages:
    37,982
    Likes Received:
    134
    But how would you know that? Maybe the oldest is the most inaccurate, the most contaminated, and thus the oldest to survive. The most accurate, of course, were used by the churches and, when worn out, discarded. The books I use the most are tattered and torn. The few books that I rarely use, kept only for reference (like the Koran, Book of Mormon, etc.), are nicely preserved.
     
  16. rbell

    rbell
    Active Member

    Joined:
    Jan 16, 2006
    Messages:
    11,103
    Likes Received:
    0
    This is what happens when a KJVO discovers "CTRL+C"

    It's a dangerous thing.
     
  17. DHK

    DHK
    Moderator

    Joined:
    Jul 13, 2000
    Messages:
    37,982
    Likes Received:
    134
    Who's the KJVO?
     
  18. billwald

    billwald
    Banned

    Joined:
    Jun 28, 2000
    Messages:
    11,414
    Likes Received:
    0
    The oldest having the best odds of being correct?
     
  19. Nazaroo

    Nazaroo
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    This would only apply if we were talking about the texts contained in manuscripts, not the manuscripts themselves.

    A manuscript can carry a text of almost any age. It could be a direct copy of a 2nd- or 3rd-century text, while a 4th-century manuscript could be a 20th-generation copy of the original. If the quality of copying were approximately average for each generation, the 4th-century text would be roughly 20 times as corrupt as the 1st- or 2nd-generation copy.
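    The arithmetic behind that claim is just linear accumulation (a back-of-envelope sketch; the per-generation error count E is an assumed, illustrative figure):

```python
# Linear-accumulation sketch (E is assumed, purely illustrative): if each
# copying generation adds about E new errors on average, corruption scales
# with the number of copying generations, not with the manuscript's date.
E = 15  # assumed average new errors per generation
first_generation_copy = 1 * E
twentieth_generation_copy = 20 * E
print(twentieth_generation_copy / first_generation_copy)  # -> 20.0
```

    The ratio is independent of E: whatever the average error rate, a 20th-generation copy carries about 20 times the accumulated corruption of a 1st-generation copy.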

    As it turns out, many relatively late copies (7th to 14th century) are copies of very old texts, possibly of 3rd- or 4th-century uncials.

    Likewise, copies like Codex Sinaiticus are actually very degraded copies made from long copy-lines of 5 to 10 generations, even before getting back to the 2nd/3rd-century stage when the books were copied separately (i.e., there was a stage when Paul's letters were copied in one manuscript while the four gospels were copied in another; before that, the gospels circulated separately).

    The best way to protect the text (even from modern editors) is to take the average majority readings from all manuscripts.
     
    #19 Nazaroo, May 10, 2011
    Last edited by a moderator: May 10, 2011
  20. Nazaroo

    Nazaroo
    New Member

    Joined:
    Oct 29, 2007
    Messages:
    417
    Likes Received:
    0
    Majority Text (IX): Analyzing Terms and Claims


    When the Probability Arguments in favor of the majority readings were first described in detail by Hodges (in Pickering, The Identity of the NT Text, Appendix C), they were attacked by D. A. Carson and others, who essentially abandoned any precise Divine Preservation of the NT text.

    [diagram]


    What is 'Normal' Transmission?

    Carson adopted the 19th-century materialist/rationalist view that there was nothing miraculous in the textual transmission process: there was no special 'Divine control' over copying, i.e., no supervision, influence, or interference by God to protect the exact wording throughout the ages.

    Because there was nothing immediately detectable in the copying process to distinguish it as supernatural, 19th-century critics were convinced there was no such influence. God was an 'unnecessary hypothesis' for a "scientific" description of textual transmission. For these investigators, the existence of copying errors and corrections in all manuscripts was taken as evidence against any supernatural intervention: the copyists were on their own.


    Materialism Remains Unproven

    The anti-supernatural attitude was prevalent throughout the late 19th and early 20th centuries. But in spite of the failure of scientific methods to detect non-material effects, the question of supernaturalism vs. materialism has proven a most difficult, if not insoluble, philosophical problem. The caution is this: just because something is not obvious, observable, or easy to detect doesn't mean it has no existence. The same 19th-century skepticism would also have rejected radio communication and atomic bombs. Finally, anti-supernaturalism itself has no place in Christian faith systems: belief in an invisible God who intervenes in history is fundamental to both Christianity and Judaism.


    The Meaning of 'Normal' in the Probability Model

    While textual critics have used the word 'normal' in the sense described above, we must note that it has an entirely different meaning in discussions of the Majority Text Probability Argument. In this context, 'normal' just means an average process, following a predictable pattern with expected results. 'Abnormal' would not mean 'supernatural'; rather, it would describe any unusual process or anomaly resulting in an unexpected outcome.

    The Probability Model does not address the question of 'supernaturalism' vs. 'materialism'. It is not concerned with causes at all. It is strictly a descriptive model that makes only basic mechanical assumptions about the process, such as the limits of time-direction, the consequences of ordinary transcriptional probabilities, and the effects of processes on statistical results. As such, the Probability Model is not a 'supernatural' theory, and it makes the same assumptions about the ordinary world that every other scientific model does. For purposes of analysis, it assumes that errors are 'random', undirected events, just as other scientific models would. But "undirected" here simply means that the process is not under the control of a person or cause which would unnaturally skew ordinary physical events. So this model is not any kind of argument in favor of supernaturalism: instead it allows the same variety of world-views that other models do.

    Because of this approach, the Probability Model cannot offer direct 'proof' of God's providence or Divine Preservation. It can only offer objective evidence which proponents of such philosophical positions can find either compatible or incompatible with their system of philosophy. So it is not the responsibility of proponents of the Probability Model to defend supernaturalism, or even interpret its findings in the light of various world-views. That must be left to others, theologians and philosophers, and investigators of the supernatural.

    What the Probability Model can do, is offer a coherent and rational description of the copying process, and from this, evaluate various text-types in a history of textual transmission and also assist in the reconstruction of original text(s).

    (to be continued...)
     
