Talk:Law School Admission Test/Archive 1


Complete re-write of criticism section

As promised, I have finished a complete re-write of the criticism section which basically includes removing any and all criticism from it. I left the much-contested "16% predictability" statistic in there. I would be happy to provide a justification for it if anybody would like to see it, including a point-by-point rebuttal to any points that JKillah has made here that I have not answered to his satisfaction. Bjsiders 22:28, 14 March 2006 (UTC)

Grutter v. Bollinger

BJSiders continues to claim that there is "expert testimony" in the Grutter v. Bollinger case which claims that LSAT scores predict only 16% of law school grades. Two of his links are to general information about the case and not related at all, and the others are copies of the case from FindLaw. The expert in question is Jay Rosner, president of the Princeton Review Foundation. In his testimony Mr. Rosner gives no sources for his own statistical facts, and in Judge Friedman's opinion on the case he states (in the same document provided by BJSiders):

"The court is unable to give any weight to this testimony, as Mr. Rosner is not an expert in test design. Nor did he claim to have studied, or to have the expertise to study, the issue of bias in standardized tests." [1], page 79.

Thus, I think the statistics need to be removed, or it needs to be added that Mr. Rosner was not considered an expert, and Judge Friedman did not consider his testimony valid. JKillah 19:19, 9 March 2006 (UTC)

There is "expert" testimony; it's well documented. As I have said before in these talk pages, you can attack the validity or "expertness" of the testimony, which is what you've done here. What you shouldn't do is assert that it doesn't exist, which is what you were implying previously by demanding additional sources and erroneously calling it a "study." You should also not remove all references to it and replace it with data that looks to suggest the opposite, as you did in this edit [2], then come onto the talk page and accuse me of replacing sourced data with something else, and THEN deny that you ever did this, all of which you've done on here. You should also not characterize a .4 correlation as "strong" when Wikipedia's own article says it's the lowest possible correlation that can be considered a correlation at all, and then later say Wikipedia is wrong and .4 is "medium". You're really stretching the limits of my capacity to assume good faith. I am more than happy to include your information as well, and I will edit the page to that effect, correcting only your characterization of the correlation. Every reference I can find says that .4 is, at best, at the high end of weak. A .4 correlation coefficient means that 16% of the variance is shared between the two data sets. SURPRISE! 16% is EXACTLY the figure quoted in the expert testimony with which you so strongly disagree. Where does the 16% come from? I won't go into the statistical details, but [3] might be a good place to start. Your statistics match up EXACTLY with the Bollinger case testimony. Again, you are wrong: .4 is NOT a strong correlation, your data matches up exactly with what the article already says, and you're trying to assert that a 16% covariance is "strong" when any decent statistics textbook can tell you it's not. If you don't like my sources, I'll remove all of them and just use yours, since they give the exact same numbers. I consider this matter settled. Bjsiders
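The arithmetic behind the 16% figure traded back and forth above can be checked directly: it is simply the square of the reported 0.4 correlation coefficient (strictly, the coefficient of determination, which this discussion loosely calls "covariance"). A minimal sketch:

```python
# r is the Pearson correlation coefficient reported for LSAT score
# vs. first-year law school grades; squaring it gives the fraction
# of variance the two measures share.
r = 0.4
shared_variance = r ** 2  # coefficient of determination
print(f"r = {r}, r^2 = {shared_variance:.2f} ({shared_variance:.0%} shared variance)")
# prints: r = 0.4, r^2 = 0.16 (16% shared variance)
```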


I think you misunderstand. R-squared values are used to quantify how much of the variability in an effect is explained by a cause, which doesn't apply in this situation.
You are right, .4 is not a strong correlation, but we can't include your statistics in this page because they are not "well documented" despite your claims. Show me one link where this 16% information comes from (that does not contain a caveat from a judge saying this information is not reliable) and we can include it on the page and take the NPOV tag off.
It is inaccurate for you to go through this page and derive the r-squared value from LSAC's statistics and then make it appear as though they use this r-squared value in their own analysis. Since we don't know their p-value, or exactly what they mean by "law school performance" (as in, class rank, actual letter grade, or a percentile range similar to their LSAT score), we cannot say how much this can be used to "predict grades."
This misreading of LSAC's data is why the statistics you present are incorrect, and why those same numbers you cling to are often passed around the Internet as a rumor. I'm not at all surprised that the 16% number matches up well with LSAC's data; I can see clearly that this is where they were misinterpreted. We have no data to say that the LSAT predicts 16% of grades; it has a .4 correlation with law school performance, and beyond that we cannot say without personal speculation.
If you personally hate the LSAT, or think it is an invalid test, perhaps you should do some research and contribute to a page on standardized testing and the controversy over them. Bullying anyone who tries to make this page accurate until they give in to your own agenda is not the way to be a productive contributor to Wikipedia. We should strive to provide accurate, unbiased knowledge, not doctor statistics so they say what you want them to.
If this continues to be a problem, we should list this page for mediation. JKillah 21:16, 9 March 2006 (UTC)
Way to assume good faith. I don't hate the LSAT. I took it, did well on it, and got into the law school that I wanted to get into. You said above, 'You are right, .4 is not a strong correlation'. Then why did you edit the article to say that it IS a strong correlation, and then turn around and accuse me of pushing an agenda when I protested? Why remove the existing sourced information and replace it with your .4 that you mischaracterize as "strong", and then claim you didn't do that? All while accusing me of bullying and pushing a POV because I hate the LSAT? I think the information from the Bollinger case belongs in the article, and I protested when you removed it. Bjsiders 00:10, 10 March 2006 (UTC)


How you did on the LSAT or what law school you attend is irrelevant to me and to this discussion. As I said before, I meant that it was "strong" in relation to GPA. If you disagree with that or think I phrased it incorrectly, I am open to your editing it. However, you dodge the main NPOV problem: you cannot interpret LSAC's statistics as you have tried to do. Your deriving a predictive value for grades from their r-value is factually inaccurate and compromises the page. JKillah 16:15, 10 March 2006 (UTC)
So is whether or not I "personally hate the LSAT," and yet you felt compelled to bring that topic into the argument. I do think you mischaracterized what the correlation coefficient represents. Further, this wearisome tactic of accusing me of doing exactly what you are doing (e.g., replacing sourced sections with one's own preferred data, bringing personal matters into the discussion) needs to stop. We both feel that we're approaching this from a reasonably objective and statistically neutral viewpoint, and we're even using the same data. I feel that you are not looking over the material I've provided very thoroughly, as you keep raising objections to it that aren't correct (see below) and are demonstrating that you don't understand what the correlation coefficients and covariances represent. Bjsiders 17:20, 10 March 2006 (UTC)
Just to once again clearly state the problem that needs to be corrected: LSAC provides statistical information of a +.4 Pearson coefficient that relates LSAT score to "law school performance." As "law school performance" is not defined, using the R-squared to say the LSAT predicts 16% of grades is inaccurate (not to mention, any test that 16% of the time could predict a student's exact grades over three years would be a remarkable test). We don't know how LSAC determined this coefficient, and thus cannot derive a covariance from it. JKillah 16:22, 10 March 2006 (UTC)
I disagree. LSAC clearly identifies "academic performance" as a correlation between LSAT score and grades in the first year of law school only. This is made completely unambiguous in the material that I originally cited. LSAC feels that the correlation of the LSAT alone (.41) is statistically significant, and strong enough to justify the use of the LSAT in law school admissions. I agree, and the section reflects this finding. LSAC further asserts that the combination of LSAT score plus UGPA is a stronger measure (.49), and should be preferred. This is also reflected in the section, and in law school admission practices in general. I feel that all of this is accurately and neutrally presented in the data and the analysis, and that it is all sourced properly in the section. You are mistaken here in your suggestion that the 16% reflects a prediction of 3 years of grades and is thus impressive. It does not; LSAC has not claimed and does not claim that it measures 3 years of grading, and this is also made very clear in the section. The document that contains this information is 'New Models to Assure Diversity, Fairness, and Appropriate Test Use in Law School Admissions', which has been included in the section from the beginning, and contains the correlation data that you wished to add to the article.
LSAC (and you) feel that a .41 correlation coefficient is "strong". I (and most statisticians) disagree. It is strong only in the sense that no stronger measure for this particular application is known. The article reflects this assertion as well. If you'd like, I will rewrite the section entirely and quote generously from LSAC's materials, and exclude the Bollinger case entirely, since it contains the same data quoted from a witness that you feel is discredited. I'm unclear as to why you think that data quoted by that witness should be disregarded, but the same data presented by another source is valid. In my opinion, data is data, and it is the analysis of it that matters. I don't have time at the moment to undertake this re-writing, but I will attempt it this afternoon and post it in this discussion page before altering the main article. A few things need to be made clear in any such rewriting:
First, that the correlation coefficient represents the correlation between LSAT score and first-year law school grades. NOT performance as an attorney, not class rank, and not 2nd or 3rd year grades. This is made very clear in the original correlation study. Second, the data covariance should be highlighted, and a brief explanation of what it represents should be included. By this I mean that the claim that "16% of first-year grades may be predicted by the LSAT" is statistically correct and accurate based on the information provided by LSAC. If you disagree, please explain the basis upon which you disagree. The basis you gave above is utterly invalid, based on a misunderstanding of what the data represents and an incomplete reading of the source material. Third, that the correlation is statistically significant, but not particularly strong from a statistical point of view. Fourth, that the LSAT correlation is higher than the UGPA correlation, that LSAT + UGPA is higher than the LSAT correlation alone, and that LSAT + UGPA is the best known predictor of first-year law school performance (by which we mean "grades"). Fifth, that due to the latter, most law schools currently use an admission index that includes both measures, weighted in a manner determined by the individual school. Finally, that the motivation for use of the LSAT may be that there is no better predictor known and the ABA requires an examination for accredited law programs, so law admission offices probably feel compelled to rely on it, regardless of how good (or bad) it may be. It's still the best thing we've got.
Again, I feel that all of this was already made very clear in the article, and presented objectively. I'll attempt a re-write to make it more objective this afternoon and post the new proposed section here. I also think the section's name was poor, and that something like, "Statistical Analysis of the LSAT" might be more appropriate. I understand that you have objections to including some of this material, but I am still unclear on why. Bjsiders 17:20, 10 March 2006 (UTC)
If you don't want to leave the LSAT statistics as provided by LSAC in their conclusion, and you want to interpret them, you must provide more qualifications to the study done by LSAC. Your interpretations are assertions that misrepresent the data. Based on the information provided by LSAC [4], this is the paragraph I propose:
"LSAC reports a selected correlation coefficient for the predictive validity of the LSAT to predict first-year average grades at +.4 (and thus with a covariance of 16%). This calculation takes into account people who take the LSAT but do not enroll in law school. If those not admitted to ABA-accredited law schools are omitted, the correlation coefficient between LSAT score and first-year grade average is +.62 (approx. 38.4% covariance) [This information appears on page 7 and again on page 16 of the LSAC report]. Additionally, when schools are grouped by LSAT and by average undergraduate GPA of matriculating students, the validity increases [page 16]."
I think that is a clear and complete representation of the facts. I have avoided using any term such as "strong" or "weak" to characterize the correlation, nor (other than adding the covariances as you asked) have I made any interpretation of the data.
This paragraph also demonstrates how this interpretation of LSAC's data is incorrect: "LSAC's correlation studies, for example, suggest that about 10% of students with a 140 LSAT score and about 25% of students with a 150 LSAT score will earn higher first-year grades than half of the students with a 160 LSAT score." This statement is incorrect, or at least not supported by the data [controlling for the homogeneity of each individual school marginalizes this effect, see page 16], and I think it needs to be omitted. JKillah 18:23, 10 March 2006 (UTC)
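For readers checking the numbers in the proposed paragraph above: the quoted percentages are the squares of the two reported correlation coefficients (strictly speaking these are shared-variance, or r-squared, figures rather than covariances). A quick sketch of the arithmetic, with descriptive group labels only:

```python
# Correlation coefficients as quoted from the LSAC report in the
# proposed paragraph above.
coefficients = {
    "all LSAT takers": 0.40,
    "ABA-admitted students only": 0.62,
}
for group, r in coefficients.items():
    print(f"{group}: r = +{r:.2f}, r^2 = {r ** 2:.1%}")
# prints: all LSAT takers: r = +0.40, r^2 = 16.0%
#         ABA-admitted students only: r = +0.62, r^2 = 38.4%
```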
That statement is LSAC's own interpretation of the data, and is contained in that same report. If you want to amend it to include a cautionary note about what it really means, I'm all in favor. Removing it entirely seems unjustified.
Further, you assert above in your rewriting that the data includes LSAT scores from students who never went to law school. Can you briefly explain how that works? I am, admittedly, not a statistician, but I have been operating under the assumption that if you never went to law school, you can't be included in a correlation study that compares your LSAT score to your non-existent law school grades. I'd like to see your source for this "qualification" about the nature of the data.
Finally, why are you so adamantly demanding that I produce sources and back up every last shred of data, and demanding qualifiers and other caveats to this information, when you were perfectly content to replace all of this data with a direct quote from a source no more authoritative than an online informational pamphlet published by Boston College that contained no source data, no references, and no bibliography of any kind to justify its information? [5] Your editorial standards were pretty relaxed when it came to that data. You challenged my sources on the grounds that they merely assert that their numbers are valid without backing it up with references. Isn't that what your source did?
I feel that you are holding me to a standard that you yourself made no effort to satisfy, all under the guise of insisting on a NPOV. First you removed sourced data and replaced it with something else, then accused me of doing that, and when I called you on it, you said you didn't do it. Then you claimed the Bollinger case was a University of Michigan study. It's not. Then you suggested the expert testimony didn't exist. It does. You included qualitative words like "strong" (in your effort to fix the NPOV problem) to describe a correlation coefficient that most statisticians agree is weak. When I proved you wrong, you said my sources were wrong, but later agreed that the correlation wasn't strong, and suddenly you're in favor of there being NO qualitative words for that same reason. Then you said we don't know how LSAC measured "law school performance" so we can't include the covariance information. I have shown that "law school performance" is clearly evaluated using first-year grades, and now you'll accept the covariance data as long as we don't try to explain what it means. You accused me of personally hating the LSAT and dragged a personal issue into this, only to later dismiss me for responding, insisting that personal matters have nothing to do with the discussion. Now you're challenging another quote on the grounds that the data doesn't support it, when the quote is from the same analysis on which your Boston College information is based.
You have made claim after claim after claim about this topic, all of them demonstrably fallacious, and all based on the same source data as the article's existing information. This tactic of making false assertions, having me rebut them, and then scrambling around to find something else to pick on while insisting on additional onerous editorial standards for my contributions is becoming tiresome, and if it continues, I will agree to your suggestion that mediation is necessary. We're clearly making little progress here.
I will take another stab at fixing the NPOV problem with the section this afternoon, as I said above. If that doesn't resolve it to your satisfaction, I believe we should seek mediation. I cannot continue to discuss this with you in good faith, based on your behavioral history in this discussion. Thanks. Bjsiders 19:43, 10 March 2006 (UTC)
You are not giving an accurate portrayal of what I have done at all. Your Grutter v. Bollinger data was not sourced; there was no source tag after the statistics cited. I tagged it for a week as needing sources, and no sources were added, thus I assumed there were none and removed it. I apologize that I called it a "Michigan Study" -- the writing in the article was quite unclear and I was misled by it. In either case, it was vague and indicated an "expert" whose testimony Judge Friedman said in his opinion could not be considered because he was, in fact, not an expert.
I characterized something as "strong" relative to something else. You disagreed, and that is fair; it can be removed or the wording can be changed. You also continue to mischaracterize what I do. I did in fact explain what the data means, by quoting the sources from LSAC.
You, however, have not addressed any of my claims, been generally uncooperative, and resorted to things such as claiming high LSAT scores and being accepted to the law school of your choice. Unfortunately, I leave town tomorrow, so if you wish to enter mediation, I won't be around for 2 weeks. I also have a life to live and can no longer continue to fight you on this. As you have dominated this discussion page for a while, I hope someone else might come along and continue to try to find a NPOV for this page. JKillah 19:53, 10 March 2006 (UTC)

(Resetting indentation) "Your Grutter v. Bollinger data was not sourced"

Yes it was. We've been over this.

"there was no source tag after the statistics cited."

It had been moved by another editor. We've been over this.

"I tagged it for a week as needing sources, and no sources were added, thus I assumed there were none and removed it."

I missed your tag, and that I will accept fault for. Like you, I have a life and don't live and die by Wikipedia. It's been a slow couple of days, so I've had time to look over my watchlist.

"I apologize that I called it a "Michigan Study" -- the writing in the article was quite unclear and I was misled by it. In either case, it was vague and indicated an "expert" whose testimony Judge Friedman said in his opinion could not be considered because he was, in fact, not an expert."

I said, repeatedly, that editing the article to challenge the "expertness," if you will, of this expert is fine. I have subsequently offered to remove all references to the Bollinger case. That still isn't good enough for you, for some reason.

"I characterized something as "strong" relative to something else."

No, your edit said: "LSAT score and law school performance are strongly statistically correlated." It was never made clear that this characterization is only relative to UGPA. Weren't you just complaining that the writing was unclear and misleading?

"You disagreed"

I do not disagree that the correlation is strong when compared to UGPA. We've been over this. The article even already made this assertion before you made your edits.

"and that is fair; it can be removed or the wording can be changed."

I agree completely, and I also agree with your suggestion that qualitative terms like "strong" and "weak" should probably be omitted entirely.

"I did in fact explain what the data means, by quoting the sources from LSAC."

But you were wrong. You said we can't use the covariance because we don't know what "law school performance" meant in the study. We can. You have very clearly stated your objections, and as I answer them, you just run off and find new objections. Once I settled your concerns about using the covariance, you found ANOTHER sentence you don't like, asserted that it's not supported by the data, and I showed you that it most clearly is.

"You, however, have not addressed any of my claims"

Are you serious? Let's run through some of your claims.
"Statistics supposedly from LSAC are cited with no sources"
I have provided sources.
"Facts such as statistics need to be sourced right after the claim."
Then move them there, don't delete the material.
"+.4 is not a weak correlation JKillah 16:58, 9 March 2006 (UTC)"
Yes it is. I gave you a source.
"the wikipedia article on correlation is wrong. JKillah 19:06, 9 March 2006 (UTC)"
I gave you MORE sources.
"You are right, .4 is not a strong correlation JKillah 21:16, 9 March 2006 (UTC)"
So how can you say I didn't address any of your claims?
"[W]e can't include your statistics in this page because they are not "well documented" despite your claims. JKillah 21:16, 9 March 2006 (UTC)"
I showed you documentation.
"As "law school performance" is not defined, using the R-squared to say the LSAT predicts 16% of grades is inaccurate."
Wrong, it's clearly defined in the material.
"not to mention, any test that 16% of the time could predict a student's exact grades over three years would be a remarkable test"
Wrong, that's not what LSAC's correlation study was testing.
"This paragraph also demonstrates how this interpretation of LSAC's data is incorrect (a quote from the article)"
I pointed out that this comes directly from the same documentation.

"been generally uncooperative"

I'm sorry that you think I'm being uncooperative, but that's exactly how I feel about you. Everything you've come up with, I have rebutted, and you sit here and accuse me of failing to respond to any of it, while simultaneously ignoring the mountain of assertions you've made that are demonstrably false. I've offered concessions, I've offered to remove the Bollinger case that you didn't like, I've offered to rewrite the whole thing, none of it was sufficient. I feel that you've been very uncooperative too.

"and resorted to things such as claiming high LSAT scores and being accepted to the law school of your choice."

We've been over this. I'll reprint the conversation for your convenience:
"If you personally hate the LSAT, or think it is an invalid test ..."
"I don't hate the LSAT. I took it, did well on it, and got into the law school that I wanted to get into."
"How you did on the LSAT or what law school you attend is irrelevant to me and to this discussion."
"So is whether or not I "personally hate the LSAT" and you felt compelled to bring that topic into the argument."

"I hope someone else might come along and continue to try to find a NPOV for this page."

That is precisely what I am trying to do. I offered to remove the Bollinger case, which you feel is discredited. That wasn't good enough. I offered to re-write the whole thing, and cited which bullet points I think should be included. You came back with absolutely zero discussion, analysis, or examination of those points. You offered only some barebones facts from the LSAC data, which mean absolutely nothing to the majority of readers who do not know, offhand, what a correlation coefficient means.
I've ignored your ad hominem attacks, such as accusations of bullying and of trying to push an agenda. I'm sure from your point of view it looks like you're throwing the facts at me and I'm responding irrationally, but that's exactly how you appear to me, too. If my tone is curt and irritated, it's because I'm irritated. From my point of view, this is how the discussion feels like it's going:
"I changed the article. It said 2+2 = 4. I changed it to 5. This pamphlet says it's 5."
"It is 4."
"No it isn't. Prove it. My pamphlet is unsourced and I'm ok with using it as my reference but you have to have sources."
"Here's the source. (SOURCE)"
"That source doesn't exist."
"Yes it does."
"Ok well you are misinterpreting it, they don't say what '2' is."
"Yes they do."
"Ok well you also said 2*2=4, when it's 5. You got that from misinterpreting the data."
"No, it is 4. From the same source."
"You're uncooperative and pushing an agenda."
I mean seriously, I feel like I'm being very indulgent of you. As I said before, we're clearly having something of a personality clash here and making no progress. I feel that you are doing all the things that you are accusing me of doing, and not answering any of my facts or charges (which you also accuse me of doing). Here they all are. Is there anything else I can answer? I seriously do want to edit the page to both of our satisfaction, if possible. Since we're both agreed, at this point, on using this data source, as it appears to be the source of both your promotional pamphlet from Boston College and the testimony of the Bollinger case, what assertions have I made that you disagree with? And which of your points have I not responded to? Here, once more, is a rundown of what I think the analysis section should include:
Theme: "There is a statistically significant correlation between LSAT score and first year law school performance. This correlation is stronger than UGPA alone, but weaker than a combined index of LSAT score and UGPA."
Supporting Evidence: "LSAC performed a correlation study that demonstrated an average 0.41 correlation between LSAT score and first-year grades, which means that approximately 16% of the variance in first-year grades could be predicted by LSAT score. UGPA, by comparison, shows only a 0.25 correlation, a much weaker relationship (6.25%). LSAT + UGPA, however, averages close to 0.5, or 25% of first-year grade variance."
Additional Information: "This is the best known predictor of first-year performance. The ABA requires an examination of applicants to accredited programs, and since the LSAT is a critical component of the best known predictor, it is used by almost all North American law schools as part of their admissions processes."
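The three-predictor comparison in the summary above can be laid out numerically (coefficients taken from the LSAC figures cited in this discussion); the percent of first-year grade variance each predictor accounts for is its coefficient squared:

```python
# Correlation coefficients for first-year law school grades, as cited
# from LSAC's study earlier in this discussion.
predictors = {
    "LSAT alone": 0.41,
    "UGPA alone": 0.25,
    "LSAT + UGPA index": 0.50,
}
for name, r in sorted(predictors.items(), key=lambda kv: -kv[1]):
    print(f"{name:<18} r = {r:.2f}  variance explained = {r ** 2:.2%}")
```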
That's what the article said before. And that's what I think it should continue to say. I will try to rewrite this in as soft and neutral a tone as I can come up with, and I will post it here. I will remove all references to the Bollinger case, I'll include your BC pamphlet source, and I'll remove the part about anecdotal evidence of statistics fluctuating with law school rankings. When you return from your trip, please review it and offer your feedback.
I'm sorry to "dominate" the discussion page with my long posts, but I'm trying to be clear and unambiguous in making my points and presenting my case. This is about as thorough and clear as I can be. I hope that all the quoting makes it clear which parts of the discussion I am referring to. If this doesn't do it, I don't know what will.
Thanks, Bjsiders 22:33, 10 March 2006 (UTC)

General Page Improvement

I have also expanded several of the sections and added more information about the test itself, so that the page is no longer dominated by criticism of the test while weak on actual information about it. JKillah 14:35, 6 March 2006 (UTC)

This Page is in Need of Better Sources

As no one has been able to provide any sources for this Michigan study, I have removed that information and replaced it with statistics that have sources. I have made clear, however, that the LSAT is not a perfect predictor of law school performance. Unless there is still disagreement, I think the NPOV tag can be removed. JKillah 14:32, 6 March 2006 (UTC)

What Michigan study? None was referenced in the article. The only Michigan reference is already sourced. If you want more sources, try [6] or [7] or [8] or [9]. The source is expert testimony. The validity of the expert may be questioned if you like but please don't remove the source. Bjsiders 17:39, 6 March 2006 (UTC)

And these experts are totally unbiased? In their "suggestion" of a correlation? What does that mean? What's worse -- none of those four links works! [10] JKillah 17:06, 9 March 2006 (UTC)

Are you suggesting that your study is demonstrably unbiased and another isn't? I fixed the links; check them again. Bjsiders

There needs to be a source added for the 'Michigan study' claiming that law school grades do not correlate with LSAT scores, or it needs to be removed. In a packet of "frequently asked questions" I received along with an application from Boston College Law School, this study was mentioned and labeled a "rumor," and the packet states that LSAT scores correlate highly with law school performance (the number in this information packet is 84%). JKillah 19:24, 28 February 2006 (UTC)

Again, what study? There is no Michigan study referenced in the article. There's a legal case involving the University of Michigan, and the article already had a source for it from the University of Toledo Law Review. I'm also suspicious of an informational pamphlet that defends the LSAT coming from a law school that has a vested interest in defending it. I think I trust the test maker more, whose interest is in getting people to weigh their test as heavily as possible, but which is saying the opposite - that schools are over-relying on it. The article already explores various possibilities for this, such as ABA requirements, and the article already defends use of the LSAT as the best tool available by which to measure applicants. The data is all sourced. Add your correlation information from your pamphlet, but don't remove a bunch of sourced data from legal cases and law journals because you read an informational pamphlet from Boston College that disagrees. Bjsiders 17:39, 6 March 2006 (UTC)

Where is the source data for these law journals? If you don't like the BC info in its nice format, here is a link to the study, which was done by LSAC [11]. JKillah 17:13, 9 March 2006 (UTC)

It's not that I don't like the BC info, I'm sure it's perfectly valid. So are other sources that offer another perspective. My objection is that you removed an entire section of sourced and documented material to replace it with something that says the opposite. What's worse, you complain about me doing this in this talk page. Bjsiders 18:25, 9 March 2006 (UTC)

As I read over it, several other problems also plague this page. Statistics supposedly from LSAC are cited with no sources, and editors seem to have compromised the neutral point of view. I am tagging this page as needing sources and needing cleanup for these reasons. JKillah 19:34, 28 February 2006 (UTC)

It's all sourced under "External Resources." Bjsiders 17:39, 6 March 2006 (UTC)

Facts such as statistics need to be sourced right after the claim. General information resources can go at the end. JKillah 16:58, 9 March 2006 (UTC)

They were when I originally edited it. If somebody else moved them to the wrong place, you should move them to where they belong rather than just remove all the material you don't like. Bjsiders 18:25, 9 March 2006 (UTC)

From Boston College School of Law article titled "Eight Common Misconceptions about the LSAT":

"1. The LSAT works only 16 percent of the time.

There is a great deal of confusion about the meaning of correlation-study results. Correlations are reported on a scale of -1.0 to +1.0, with -1.0 representing a perfect inverse relationship--as one measure goes up the other goes down--and +1.0 representing a perfect positive relationship. The national correlation between LSAT scores and first-year grades tends to be around +0.4. By comparison, the national correlation between undergraduate and law school grades tends to be around +0.25. The correlation for both variables combined is approximately +0.5.

The relationships among LSAT scores, undergraduate grades, and law school grades are all fairly strong, particularly when one considers all of the many and varied personal factors that have an impact on performance in law school--factors that include study habits, determination, work or family obligations, quality of instruction, and many, many others.

The LSAT is used to make admission decisions, not to explain performance variance. These two purposes are very different.

The bottom line is that the LSAT, although limited in its utility, is the single strongest numerical predictor of success in the first year of law school that is available to an admission committee when admission decisions must be made." [12] JKillah 14:08, 6 March 2006 (UTC)
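A side note for readers trying to reconcile the "16%" figure with the "+0.4" figure traded back and forth in this thread: the percentage of shared variance between two variables is simply the square of their correlation coefficient. A minimal Python sketch (the r values are the ones quoted in the BC excerpt above; nothing here comes from LSAC directly):

```python
# Shared variance between two variables is the square of their
# Pearson correlation coefficient r. The r values below are those
# quoted in the Boston College excerpt.
r_lsat = 0.4        # LSAT scores vs. first-year law school grades
r_ugpa = 0.25       # undergraduate GPA vs. law school grades
r_combined = 0.5    # both predictors combined

for name, r in [("LSAT", r_lsat), ("UGPA", r_ugpa), ("combined", r_combined)]:
    print(f"{name}: r = {r:+.2f}, shared variance = {r * r:.0%}")
```

This is why "a +0.4 correlation" and "explains 16% of the variance" are the same claim stated two different ways, which is the crux of the disagreement here.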

This is EXACTLY what the article already said. LSAT correlation to first year performance is weak but there's no known better predictor. Bjsiders 17:39, 6 March 2006 (UTC)

+.4 is not a weak correlation. JKillah 16:58, 9 March 2006 (UTC)

So remove the word "weak" rather than yank an entire section of sourced material. You're trying to correct a POV problem by removing a bunch of sourced content because you don't like one adjective? Bjsiders 18:25, 9 March 2006 (UTC)
Also, you are wrong. A correlation of under .4 is considered no correlation at all, and anything from .4 to just under .9 is considered a "low" correlation. .9 and above is a strong correlation. Your own sources verify what the article already says - the correlation is statistically weak. I am going to replace your characterization of the correlation. [13] Bjsiders 18:29, 9 March 2006 (UTC)

Yes, the Wikipedia article on correlation is wrong. I don't know why they list two "low" categories; I think that is probably a mistake. The more standard breakdown is 0-.33 being "weak," .33-.67 being "medium," and over .67 being "strong." Above .9 is nearly perfect and rarely achieved. A Pearson coefficient of +.4 is not strong, but neither is it weak. Combined with grades at about .5, it is a decent predictor. JKillah 19:06, 9 March 2006 (UTC)

That begs the question of why you then characterized a .4 as a "strong" correlation when the article here says it's weak and you yourself say it's at the low end of medium. I'm having trouble assuming good faith from you, sir. Bjsiders 20:28, 9 March 2006 (UTC)

Well I'm sorry that you are having those problems with me (though I'm not a sir), but I think they are misplaced. My characterization of "strong" was in comparison to GPA, which being below .33 is relatively weak compared to the LSAT. That aside, though, my goal is to make this page accurate, I don't have an agenda one way or the other. Please review my comments regarding Judge Friedman's comments above so that we can change the page and remove the NPOV tag. JKillah 20:48, 9 March 2006 (UTC)

Also, you cannot remove sourced material to replace with your material that appears to come from broken links or nowhere. JKillah 17:06, 9 March 2006 (UTC)

But ... you just did that. You removed a bunch of sourced material and replaced it with your own material. I am putting the other material back in, but I am leaving yours in place to ensure that the article represents a tapestry of analyses of the topic. Bjsiders 18:25, 9 March 2006 (UTC)


I didn't remove anything. I just added a paragraph. JKillah 19:19, 9 March 2006 (UTC)

Analytical Reasoning

This statement has been removed: "This section is perhaps the most coachable section of the LSAT; as a consequence, its complexity and difficulty has increased steadily since the late 1990s." The opposite is actually true. The analytical reasoning section has been de-emphasized throughout the late 1990s because of its coachability. It is now much easier, and usually shorter than it was in the early 1990s. JKillah 14:15, 6 March 2006 (UTC)

Far from it. The section may be shortened, but that's because the individual questions are more difficult on balance. Any prep documentation can tell you this. If you'd like a source, one, another. Any LSAT prep literature will say the same thing. Try anything from Kaplan or the Princeton Review. I'm putting the statement back in unless you can source your information. Bjsiders 17:39, 6 March 2006 (UTC)

One of the sources you cite above is not a real page, and the other is not a legitimate source. I am a trained LSAT prep tester for a large test-prep company. I know the logic games section has become easier from my training and from my personal experience with the test. JKillah 17:02, 9 March 2006 (UTC)

Analytical Reasoning has most definitely become easier. Most LSAT prep tutors and experienced test takers generally regard the 2003 tests as the turning point at which logic games were made significantly easier, while reading comp's difficulty was maximized. Raw scores have also increased since 2003, which provides further evidence of a decrease in logic games' difficulty. The link below shows the increase in raw scores. http://www.powerscore.com/lsat/help/correct_targeted.htm

I agree with JKillah's comments about the change in analytical reasoning's difficulty, though the change was not dramatic. The previous comment is correct in that the change happened after 2003. I added it back in to the page tying it with the parallel increase in reading comprehension difficulty. Icebox93 16 November 2006

THE PROBLEM OF THE LSAT


There is at least one potential inaccuracy as well as some nagging issues contained within this article. The third paragraph of the section entitled Criticism contains the term "naturally intelligent". Never has it been proven that "natural intelligence" exists. Also, if the elite schools never or only rarely admit candidates with lower than a certain score (the "cut-off"), evaluation of performance of these individuals is impossible. Related to this second point we must consider the "snowball effect" or law of self-perpetuation. A J.D. student at an elite school will perceive a superior opportunity among his peer group (as a class most law students are early twenty-something baby echoers), experience the tutelage of uniformly Ivy League degreed, often celebrity professors, etc. He or she will therefore have incentive to be more productive--any Harvard business student is familiar with the Hawthorne Effect. Therefore any correlation between score and grades is suspect and claims of predictive validity compromised. Nostradamus made predictions too. There is no reason to think that a less privileged individual would not perform equally well in a similar environment. This should apply at all levels. Even if we entertain the statistical argument, serious issues arise. For example, the predictive validity of this test is .41. We can therefore infer that admissions errors are already being made for this year. Also interesting to consider is the fact that the elite preparatory schools boast stellar Ivy League placement rates, but a child does not choose his elementary school. Placement in school is usually a function of the parent's social status. The consequences for university admissions are obvious and disturbing. The far-reaching effects are equally disturbing in that a recent decision of the Supreme Court of the United States contained an opinion expressing the sentiment that quality of school does not matter in later life.
However, 5 of the current justices are Harvard Law graduates--6, including Roberts!(1) Returning more to the point, over the past 3 years (classes of '04, '05, '06) in excess of twenty-five percent of Yale Law's graduating classes were alumni of two schools--Harvard and Yale.(2) The likelihood that, of tens of thousands of individuals presenting credentials from less prestigious undergraduate institutions, more were not qualified for the two most elite schools would seem to be minuscule. Consider as well that the elite schools only admit a very small number of transfers when in reality all prospective transfers with higher averages than currently enrolled students should be admitted and the currently enrolled released from tutelage. Since the latter class of students is not dismissed, we are compelled to examine the rationale for these students' original admissions. The inference that a double standard exists can be justified.

The LSAT as well as quality of school are not the solution to the problem--they ARE THE PROBLEM. The solution to the problem of Ivy League admissions (which is the point of the LSAT) is at best elusive and at worst unattainable. Not until such factors as standardized test scores and quality of school are discredited, debunked and abolished will fairness and the solution be achieved. 1413361

1 New York Times

2 The Bulletin of The Yale School of Law 2004-2005


I've made several edits to this page and somebody keeps adding back in factually incorrect material and re-organizing a bunch of PoV stuff (some of which I contributed). I've removed all of this content and instead quoted generously from a respected law review regarding the issue, including as many cites from LSAC itself as I could. The consensus seems to be that the LSAT is the best known predictor of law school performance, but its correlation is pretty low. It's almost always cited as being under 50%, and sometimes as low as under 20%. Based on this, I feel that the criticism section is merited, and I've sourced these materials. I'd like for somebody to dig up other studies and citations that provide higher correlations, if you can find any. The one I dug up was from the recent lawsuit against the University of Michigan, and an "expert" testimony quoted 16-20%. I think most other studies place it higher. In any case, I feel that the section is more honest and comprehensive now, and more useful to somebody doing research. I hope nobody takes offense at the great swath of material that I removed, but upon reading it, it was clear to me that two users were going back and forth with PoV edits, re-arranging content to make one side or the other appear to be the "right" conclusion. Bjsiders 18:20, 29 September 2005 (UTC)

User 68.40.103.126 made some extensive changes to the Criticism section that amounted to a point-by-point rebuttal of cited material with PoV, and a "source" at the end that had nothing to do with any of the rebuttals, most of which were assumptions, presuppositions, and opinions. I left the source in, as it's an excellent professional study of the impact of admissions practices on minority law students. The study, however, mostly backed up the existing material in the article and provided no basis of support for the half-dozen "However..." clauses that 68.40.103.126 added to the page.

This statement was added, "The motivation for such studies appears to be the fact that minorities generally do worse on such tests, presumably because they have less educational opportunity. However, this would seem to be more an indictment of elementary public education than the exam per se." Statements like "appears to be" and "presumably" suggest a bit of a PoV here. I was also unable to substantiate this statement with the given source. The source cites is about affirmative action admissions policies and doesn't cover elementary public education in any fashion. This suggests to me that the user in question is espousing a PoV on this.

Another change: "However, other evidence indicates that the predictive value of the test actually increases after the first year." None of this evidence is presented; it's only claimed to exist. Another: "However, this correlation is based on the current admissions regime where students at all schools are fairly close in terms of LSAT range. (Clearly, any correlation will be minimized when variance is small.)" Knowing little about statistics, I can't speak to the validity of whether or not correlations are "minimized" when the variance is small. Also, the statement about "admissions regimes" and LSAT score ranges being close at all schools is confusing at best. There are schools with LSAT scores that range from the upper 140's to the low 150's, and those that accept only a tight range of high-scoring students. The statement needs clarification and supporting evidence, and an explanation of the relationship between variance and correlation.

More changes: "Others argue that if taking a LSAT preparation course suposedly improves an applicant’s score by an average of about seven points (as claimed by some companies)". This isn't "claimed by some companies"; this is a documented fact submitted before the United States Supreme Court. Kaplan and other such companies rarely make such bold claims; they only offer your money back if you don't improve your score.

More: "However, such score increases are disputed, and generally only reflect the difference between someone with no exposure to the test, and the same person after completing the course. No comparisons have been made to those who self-studied for the same period of time with inexpensive preparation materials (which can arguably be equally effective.)" The score increases are not disputed; they're well-documented. Personally, I agree with the user's opinion that the score increases are a reflection of preparation itself rather than hired preparation specifically. Regardless, the statement that the increases "generally only reflect" anything is complete point of view with no supporting documentation. In fact, the user states in the next sentence that no documentation to support this exists. It's nothing but an opinion, and although I agree with it, it's out of place here.

Another one: "However, a recent study by a UCLA law professor has indicated that students who are admitted into law schools with lower LSAT scores than the overall student body will generally place at the bottom of the class, will generally have an inferior educational experience, and will be more likely to fail the bar exam than if they had studied at a school in line with their numbers. Such results support the argument that the exam is a generally meritocratic and rational tool for placing students at schools that reflect their academic abilities."

This isn't what the professor in question was studying at all. His study was specifically about affirmative action policies in law schools admissions and how they damage the educational experience for minority students. The study also does not focus on the LSAT in any specific or extended manner. All admissions criteria are examined in detail and the LSAT receives no particular treatment, and no evidence is provided to support the last statement of this paragraph.

In summary, I removed all of these edits. I suspect they were made by the same person who continued to re-arrange the criticism section before to ensure that his PoV on the topic was the "final word." This kind of lexical gerrymandering is specifically why I re-wrote and sourced the passage. The additions are unprofessional, unencyclopedic, and in my opinion, PoV. The cite itself is a useful addition to the topic, but the various "However," rebuttals are not.

Bjsiders 14:59, 3 October 2005 (UTC)


The same user continues to edit the page and add flat-out opinions as rebuttals to each cited fact presented in the Criticisms section. We may need to have some kind of arbitration here, as the user is not participating in the discussion, only making changes. The latest crop:

"On the other hand, qualified applicants with strong aptitude who had to work in college, potentially impacting their grades, can also use the exam to highlight their true abilities." Plenty of people with excellent grades do poorly on standardized testing. This is, in any case, an opinion.

"Such arguements apparently derive from the fact that disadvantaged groups tend to struggle with the exam. However, it is not clear why improving elementary education for such groups would not be a more effective approach for increasing diversity." This is an opinion.

"However, this 'weak' correlation is based on the current regime where LSAT ranges are each school are fairly narrow. More open admissions policies would presumably produce greater correlations." Here, the user actually removed a set of facts quoted from a source in the article and replaced them with this opinion. He quoted the word 'weak' also, which wasn't used anywhere in the cited material. This is pure PoV. Wikipedia's purpose isn't to prognosticate about what would "presumably" be true if something else changed in the future.

"However, a recent study by a UCLA law professor has noted that students with lower LSAT's than their fellow students will generally place near the bottom of their class, have an inferior educational experience, and have more trouble passing the bar exam." Again, I looked over the source material, and didn't find this conclusion mentioned. Further, this statement doesn't really flow from what it's intending to counter. The user needs to quote the source here, in my opinion, to support this fact. I'm also growing concerned that we're going to have this article turn into a back-and-forth of quotes and citations from people with opposing opinions on the matter. I have quoted LSAC, the people who actually create and administer the exam, and expert testimony from the most recent United States Supreme Court decision that involved the exam in any fashion.

I would ask that user 68.40.103.126 please participate in the discussion here on the exam, and refrain from adding opinions to the article until we can hash out an agreement in here on what constitutes a factual, encyclopedic entry on the matter. The user clearly has a strong bias in favor of the exam and is mounting a defense of it. I honestly have no strong opinion either way. I tend to perform well on standardized tests, so they've tended to help me along in my academic career. However, in researching the LSAT, especially following the SCOTUS case about the University of Michigan, it seems clear to me that some criticism is warranted, though the criticism seems aimed more at law school admissions policies than at the exam itself. The exam is a tool, and if it's misused by admissions councils, that's no indictment of the tool itself. Perhaps a revision to the article along those lines, one that clarifies criticism of the tool vs criticism of its application, would satisfy 68.40.103.126?

Bjsiders 14:20, 5 October 2005 (UTC)

User 68.40.103.126 has once again made a PoV edit. This time (s)he simply deleted all the sourced, cited material that (s)he didn't care for and replaced it with a PoV statement that can be fairly summed up as, "people who criticize this test just did poorly on it because they're too lazy to study." This is the third time I've reverted this page. I've suggested here several times that 68.40.103.126 discuss his edits here, and I've proposed a modification to the layout/language of the page to include 68.40.103.126's defense of the exam, but so far he continues to make PoV edits and not discuss any of it. I'm not sure what to do at this point. Bjsiders 16:36, 11 October 2005 (UTC)

Multiple LSAT Scores

Because of the new ABA rule that requires law schools to only report a matriculant's highest score (given that they took the test more than once), I will make the necessary adjustments. I think we should continue to emphasize that students should only take the test once no matter what.

LSAT is spelled wrong

LSAC stands for "Law School Admission Council" and LSAT stands for "Law School Admission Test", so the 's' in 'Admissions' needs to be removed throughout this entry. See www.lsac.org for confirmation.

Done. I also moved the LSAC article appropriately. Bjsiders 16:42, 9 December 2005 (UTC)

Studying for the LSAT

I agree with mrosscan. According to official literature from LSAC, the LSAT "can and should be studied for." However, this advice should only be taken to a point, as they also point out that people who take an LSAT prep course score an average of 1 point higher than those who did not. To me this suggests that the test is "studiable" to a point, but perhaps not everyone can make a perfect score regardless of practice time. I have added that detail and removed the neutral point of view tag. In my opinion as both a law student and a trained LSAT coach, the neutral point of view is no longer compromised for that section. JKillah 13:56, 6 March 2006 (UTC)

It was always my understanding that the LSAT was a test designed to test fundamental reasoning abilities (described in the article), similar to an IQ test. Its purpose, after all, is to judge whether a candidate has the mindset to function well as a lawyer. Thus, preparation can only increase one's score to a certain point, much like getting a feel for the exam, as opposed to actually getting better at it.

That part of the article seems to overstate the importance of preparation for it, although likely getting familiar with the exam would at least slightly improve one's mark. mrosscan 17:18, 28 Feb 2006

The LSAT is a very predictable test and thus unusually coachable. It tests no particular set of knowledge, like the MCAT does. It tests a specific set of skills in a very specific and narrow manner, and that makes it easy to prepare for. It's not that preparation is critical to doing well; it's that almost anybody can significantly improve their score by practicing. Bjsiders 13:59, 1 March 2006 (UTC)

Allow me to elaborate. There's more to this than just knowing what types of questions will be on the test. For each arguments section, there will be 6-8 "identify the assumption" questions, 6-8 "strengthen or weaken" questions, etc, etc. By taking a prep class or just learning how many questions of each type one can expect to encounter, one can min/max his score. If I know that the test will only contain 1 or 2 questions of a certain type, why spend tons of prep time practicing with those types of questions? You can micromanage your skills down to the per-question level, and almost everybody experiences incredible improvements in the games section. The types of questions, the skills for solving them, the layout of the exam, it's all very structured, rigid, predictable, and coachable. There's no specific knowledge one needs to have to succeed, there's nothing to "study" for or memorize to do better. You just practice the most common question types. There's a reason why Kaplan and all these other companies can offer you your money back if you don't get a better score. The purpose of the test is absolutely NOT to "judge whether a candidate has the mindset to function well as a lawyer." LSAC doesn't claim that and neither does anybody else. The purpose of the test is to quantify the student's aptitude for 1L, and that's it. And the test isn't even very good at doing that, it's just better than anything else we've come up with so far. I'm puzzled as to why it's not a neutral POV to note that when taking a test that is factored heavily into a life-altering decision, one should prepare for it. Bjsiders 16:02, 1 March 2006 (UTC)

There are a couple of complaints I have with this page. First of all, if you're going to say that mere preparation can result in "significant" improvement, you ought to provide some evidence. Otherwise, it's just speculation. I think that part is too strong. You can certainly say that one must prepare simply to do his/her best, but if you cannot provide evidence of "significant" improvement or even define what "significant" means (because it's different depending on which law school you talk about), then I think a modification is necessary. There is a difference between, say, a modest four-point improvement and a twelve-point improvement. Keep in mind that Bjsiders probably did well on the LSAT and thinks it's coachable, but I personally know many students who failed to improve in spite of rigorous preparation. I do not question the importance, but rather the coachability aspect of it.

Along those lines, another strong statement that needs to be removed or watered down is "A score below 150 may make it difficult to be admitted to an ABA-accredited law school." First of all, why is it there? For that matter, why is the previous statement there? I thought we were talking about the LSAT here. Anyways, establishing arbitrary cutoff lines is something that most law schools say they don't do (though I tend to believe it's only in a few cases). But even more importantly, 150 sounds so random. Did you ever consider the notion that a student with a 149 LSAT and a 3.8 GPA may index higher than a student with a 151 LSAT and 3.0 GPA? There are so many cases where this statement is wrong that I believe it should be eliminated altogether or watered down even more, in spite of "may." It doesn't tell the whole story, so get rid of it or understand the context better.
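The indexing point in the comment above can be made concrete with a toy formula. To be clear, the weights and normalization below are entirely hypothetical; every school sets its own index, and nothing here reflects any actual school's formula:

```python
def admission_index(lsat, gpa, lsat_weight=0.6, gpa_weight=0.4):
    """Hypothetical admissions index. Real schools use their own
    formulas; the weights here are invented for illustration only."""
    lsat_norm = (lsat - 120) / (180 - 120)  # LSAT is scored on a 120-180 scale
    gpa_norm = gpa / 4.0                    # GPA on a standard 4.0 scale
    return lsat_weight * lsat_norm + gpa_weight * gpa_norm

# With these made-up weights, the 149/3.8 applicant indexes higher than
# the 151/3.0 applicant, illustrating why a flat "below 150" line
# doesn't tell the whole story:
print(admission_index(149, 3.8))
print(admission_index(151, 3.0))
```

Any formula that weights GPA at all will produce crossovers like this near a nominal LSAT cutoff, which is the substance of the objection above.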

Since this page has not been updated in a while, I will go forth and make the changes I proposed above. —Preceding unsigned comment added by 141.213.170.81 (talkcontribs)

I think you need to allow for a little bit of generalization. I don't think it's unfair or inaccurate to say the exam is coachable. It may be inaccurate to say every student responds to preparation, or that every student can be coached. A quick look around the net yields a plethora of organizations describing the coachability of the LSAT. Granted, most are offering coaching services.
Also, nowhere did the article say that "mere preparation can result in significant improvement," so I see no reason to cite that claim. It said the exam responds significantly to preparation, and it does. That does not mean that every single person who studies does better. I do agree that the exam is coachable, but what I think and how I performed are irrelevant to this discussion. Like you, I personally know students who prepared, and one who took the Kaplan class twice and scored worse after taking it each time. That doesn't mean the exam isn't coachable. It means it's not coachable for every single person. If we go around removing any generalization in any article for which we have personal anecdotes that do not match up with it, there would be nothing in Wikipedia but links to external articles and tables of facts. As I said in opening this response, you need to allow for a little bit of generalization.
I don't know about the "below 150" remark. There are plenty of lower-tier programs that accept applicants with scores in the high 140's and low 150's. Obviously, depending on the admissions indexing formula of a given program, the various weights of GPA and LSAT score, even with a 145 you might find a place to get in. I agree with the statement, in that it "MAY" make it difficult, but it's certainly not impossible. As somebody with a horrible undergraduate GPA, yes, I have considered a multitude of scenarios involving various combinations of LSAT score and GPA while determining whether to apply this year or do more undergraduate work first. I have no objection to removing the line completely.
I object to removing the assertion that the LSAT is coachable. Well, I object if we're removing it for the reasons you have given, anyway. They are, if I understand you correctly, rebutting one claim that the article didn't make, and asserting that you personally know people who did no better after rigorous preparation. I don't think either claim justifies removing the coachability adjective, nor de-emphasizing how important preparation is. I'm amenable to making the language more clear and less ambiguous, however, and I'd like to discuss it more rather than just go edit your improvements. I also think the definition of "significant" is irrelevant, but if you want to remove the term entirely, there's no strong argument I can think of to keep it. Bjsiders 15:21, 6 April 2006 (UTC)

Ok, fair enough. I agree that there must be some room for generalization. I am absolutely supportive of keeping the strong language about the importance of preparing for the LSAT. The "coachability" aspect of it still troubles me because it seems to imply that you can simply hire tutors who will show you how to beat the test, thereby leading to a "significant" improvement. I think our disagreement is over word choice. Anyways, instead of using the word "coachability," I guess you can change the phraseology to "Most students can improve, sometimes significantly, with guided preparation." Or, you can keep the current language (while eliminating "significantly"), but I would then write a statement about how the LSAT is not coachable for everyone.

I think you're right. The exam is "coachable" because you can learn how to take it. You can learn techniques and tricks to do well on it. Tests that examine specific pieces of knowledge aren't as coachable. You either have the knowledge or you don't. But there is a general method to solving a lot of LSAT problems that can be taught, and that's what makes it coachable. Whether or not every student learns them well enough to use them to his advantage is another issue. You CAN hire tutors who will show you not how to "beat" the test but how to eliminate wrong answers. You need only look at the vast improvement that most students experience in their games section once they learn how to tackle them for evidence of this. The LSAT is a coachable test. I think that's an objective and factual statement. That does not, in my opinion, necessarily imply that ALL people can learn to "beat" it, and I find the caveat unnecessary. And, again, the article doesn't assert that preparation leads to "significant improvement" but that the LSAT is "significantly responsive" to preparation. And I think that's a fair statement. Bjsiders 21:31, 6 April 2006 (UTC)

Okay, I think the issue is resolved.

Statistics on correlation between 1L grades and LSAT scores, etc

Hi, in the "Use of the LSAT in Law School Admissions" section some of the statistical language is wrong, sometimes subtly, and sometimes egregiously. I fixed the most egregious error myself, but it still says things that imply that a certain magnitude correlation coefficient is statistically significant, while a lower magnitude is not (untrue, the two are not related, and you could have low significance, high correlation coefficient, etc). I haven't gone through it more thoroughly because I mostly don't think I have enough expertise or knowledge to do so and be sure I'm not just mucking it up more. Is there anyone that could perhaps please fix that section? Or for the meantime, advise as to an appropriate template to place there? Thanks. D. G. 01:49, 30 June 2006 (UTC)
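D. G.'s distinction between statistical significance and the magnitude of a correlation can be illustrated with the textbook t test for a Pearson correlation. A sketch (the formula is the standard one; the sample sizes are invented purely for illustration):

```python
import math

def t_statistic(r, n):
    """t statistic for testing whether a Pearson correlation r,
    computed from n paired observations, differs from zero."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# Significance depends on sample size, not on r alone: a modest
# r = 0.4 from a large sample yields an enormous t (hence a tiny
# p-value), while a larger r = 0.7 from a tiny sample may not even
# clear conventional significance thresholds.
print(t_statistic(0.4, 10000))
print(t_statistic(0.7, 6))
```

So "the correlation is statistically significant" and "the correlation is large" are independent claims, which is exactly the conflation being objected to above.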

I favor removing all of the coefficients and math completely. I think the original criticism section was sufficient to convey WHY people criticize the LSAT. We had one guy show up here a few months back and, after reading a pamphlet from some law school that talked up how good the LSAT is, wanted to argue against the criticism section. In the end, it turns out his pamphlet quoted the same study that the article did; it just spun the study differently. After a long back and forth he vanished for a few weeks, I re-wrote the whole section to address his concerns, and he never came back to get involved. I favor yanking all the tedious exposition about correlation coefficients and everything else. Bjsiders 04:28, 30 June 2006 (UTC)

That's not a bad idea; I hadn't thought of that. There isn't a need for Wikipedia to expound on the actual statistics involved, just the punchline, the actual conclusions. Consensus? Does anyone know if there's any guidelines on the use of statistics? D. G. 18:37, 30 June 2006 (UTC)

I yanked all of it and just summarized how the admission index worked. I'm going to re-write the criticism section as a separate section later next week, and I'll try to leave all the math out of it. Bjsiders 19:11, 30 June 2006 (UTC)

Thanks. It's a lot clearer now that it's rewritten. I added the reference note back in because I'm not sure whether you meant to remove it; as it stands there's not really any source for that general information. But let me know if I'm mistaken about that and it actually shouldn't be there. D. G. 17:17, 2 July 2006 (UTC)

Further Commentary

Any arguments concerning statistics in this issue are purely absurd. At each level, privileged young men and women are enrolled at expensive prep schools. In effect, their baby boomer parents like us buy them high scores. Arguing over coefficients of correlation is analogous to arguing over the fine points of alchemy. Of course the men and women of SCOTUS and all other levels of government are well aware of this. Imagine how many baby boomers weren't admitted to our best universities because of teenagers!

I don't understand your point. You think the correlation statistics don't belong because their parents buy them high scores with expensive undergraduate programs? Bjsiders 12:28, 10 July 2006 (UTC)

I'm not sure I follow the point of the argument, if any, by the anonymous user. D. G. 05:31, 11 July 2006 (UTC)

Experimental section

The last comment in there strikes me as pretty POV... "The experimental section also amounts to unpaid research being done on LSAC's behalf by examinees who are already paying for the testing." Upon reading it for the first time it sounds like editorializing, regardless of whether it's factual. I don't think that, without a source cited, it's appropriate. If that's a legitimate criticism some have of the LSAT, that's fine, but Wikipedia (I understand) is not a place for original research. Mkilly 05:20, 8 August 2006 (UTC)

I added that line in. It was mentioned in both my copy of the Princeton Review and the Kaplan class I took on the LSAT, so it seemed to me to be a fairly common criticism of the experimental section. I thought LSAC had an excellent rebuttal for it - how else do they test questions fairly? If they pay people to take the test or test the questions in any situation other than a real testing situation, how do they know that the question is clear, fair, and at the appropriate level of difficulty? Bjsiders 13:21, 8 August 2006 (UTC)
I guess I just really don't like the phrasing, because it sounds tacked-on out of spite from some guy that was rejected by Ann Arbor or whatever. That is a good rebuttal; why isn't it in the page? Mkilly 00:25, 16 August 2006 (UTC)
Because it amounts to original research. The stuff about "unpaid research" is what other sources say about the section. I think LSAC could mount an excellent defense of the practice, but I've not found a quote or source to that effect. Bjsiders 01:00, 16 August 2006 (UTC)

Lowest Grade for admission to law school

This article was very good (IMO), but it didn't answer a crucial question of mine: what score is necessary to be admitted into most schools? I know law schools take into account a number of factors (GPA, references, previous work), but what's the minimum someone should shoot for on the LSAT? Scott Free 07:06, 21 October 2006 (UTC)