Talk:Gap analysis

From Wikipedia, the free encyclopedia

Other uses[edit]

There is a scientific usage of 'gap analysis' as well. The meaning is generally the same, but relates largely to knowledge gaps that need to be addressed for, say, wildlife management decisions to be made effectively.

Gap analysis seems to be a subset of business/commerce and economics, especially economics, as it is a measure of utilized resources vs. potential output, very similar to the production possibilities frontier. --ShaunMacPherson 14:43, 29 January 2007 (UTC)[reply]


Disambiguation: Gap analysis (business) and Gap analysis (wildlife conservation)[edit]

Dear Wikipedians, can we change the title of this article to 'Gap analysis (business)' so that it distinguishes between gap analysis in business and in wildlife conservation? I am about to create an article for Gap analysis (wildlife conservation), so I wonder if it's okay to proceed with this. Thank you. AppleJuggler 06:10, 6 December 2006 (UTC)[reply]

Gap Analysis Program (GAP)[edit]

This is my first Wikipedia entry, but here's an entry for the US National Gap Analysis Program (GAP). Please discuss, add to, and correct it as needed as part of the new Gap analysis (wildlife conservation) page.

Gap Analysis is a GIScience application that has become a major tool in conservation planning since the 1980s. The original methods (Scott et al. 1993, Jennings 2000) include: mapping land cover type (usually vegetation); modeling species distributions based on predicted habitat suitability and subsequent field verification; and determining land stewardship and management status. ‘Gaps’ in the conservation network occur where areas of high species diversity or rare ecosystem type are not well-represented within protected areas.
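The overlay logic described above (richness layer plus stewardship layer, with 'gaps' where high diversity falls outside protection) can be sketched in a few lines. All grid values and the richness threshold below are invented for illustration; real GAP layers are far larger and come from land-cover and habitat models.

```python
import numpy as np

# Hypothetical 5x5 rasters on a common grid (values invented for illustration).
species_richness = np.array([
    [2, 3, 8, 9, 4],
    [1, 2, 7, 8, 3],
    [0, 1, 2, 3, 2],
    [5, 6, 1, 0, 1],
    [6, 7, 2, 1, 0],
])
# Stewardship status: True where the cell lies within a protected area.
protected = np.zeros((5, 5), dtype=bool)
protected[3:, :2] = True  # one reserve in the southwest corner

# A 'gap' is a cell of high richness that falls outside the protected network.
RICHNESS_THRESHOLD = 7
gaps = (species_richness >= RICHNESS_THRESHOLD) & ~protected

print(int(gaps.sum()), "gap cells")   # count of unprotected hotspots
print(np.argwhere(gaps).tolist())     # their (row, col) positions
```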

1 U.S. National Gap Analysis Program (GAP)
The gap analysis methodology was formalized in 1987 by the USGS National Gap Analysis Program (GAP), providing a scientific, spatial analysis of the potential effects of habitat fragmentation on species viability. By 2003, the National GAP, a collaboration of hundreds of federal and state government agencies, universities, non-profit organizations, conservation groups, tribes, and businesses, had systematically analyzed the conservation networks of each of the 48 conterminous United States. The stated goal of the National GAP is “keeping common species common” because it is assumed that protecting species is both easier and less expensive before they are threatened with extinction. The resulting extensive scientific dataset of land management/ownership, vegetation cover, and terrestrial vertebrate distribution is mapped at the landscape scale (1:100,000), which permits regional analysis beyond state boundaries and has resulted in applications of gap analysis beyond the scope of the National GAP. Maps and data are available for free download at the USGS/NBII website: [1].

2 Critiques and Limitations of Gap Analysis
2.1 Threat Indicators, Scale Dependence & the ‘Modifiable Areal Unit Problem’
Indicators of human threats, such as population growth, land use, and road density, have been proposed to enhance gap analysis and further prioritize which ‘gaps’ are most immediately threatened. However, because species responses to threats vary, gap analysis can only portray potential threats. Indicators of conservation value, such as species richness, have no inherent spatial scale. Thus, the optimal scale range for the minimum mapping unit (MMU) is determined on a case-by-case basis, balancing scientific credibility against data availability and cost effectiveness. Scale dependence of the MMU is a variant of the ‘Modifiable Areal Unit Problem’, or MAUP (Stoms 1994): the larger the MMU, the more species it will contain, so large units over-generalize species richness while small units increase statistical uncertainty in habitat distributions. Scale dependence introduces statistical error in spatial analysis.
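The scale dependence described above is easy to demonstrate numerically: aggregating a hypothetical fine-grained species occupancy grid into larger mapping units inflates per-unit richness. All numbers below (20 species, a 32x32 grid, 5% occupancy) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 species, each occupying random cells of a
# 32x32 fine-grained landscape at roughly 5% occupancy.
n_species, size = 20, 32
occupancy = rng.random((n_species, size, size)) < 0.05

def mean_richness(occupancy, block):
    """Mean number of species per mapping unit of side `block` cells."""
    s = occupancy.shape[1] // block
    # A species counts as present in a unit if it occupies any cell within it.
    per_unit = occupancy.reshape(n_species, s, block, s, block).any(axis=(2, 4))
    return float(per_unit.sum(axis=0).mean())

# Richness per unit grows with the size of the mapping unit (MMU).
for block in (1, 4, 8, 16):
    print(block, round(mean_richness(occupancy, block), 2))
```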

2.2 Mapping Uncertainty
Predicted species habitat distributions in GAP data contain numerous errors of commission (attributing presence where a species is absent) and errors of omission (attributing absence where a species is present), resulting in large composite error when map layers are combined. Despite this, species distribution maps produced by gap analysis rarely incorporate error into the visual representation. In gap analysis applications, this composite error can result in dramatically different conservation recommendations (Flather et al. 1997). In addition, residual multiscale sampling effects can be identified using statistical techniques such as sensitivity analysis.
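How commission and omission errors compound when layers are stacked can be illustrated under a toy independence assumption. The 10% per-layer error rate below is invented, and real layer errors are neither uniform nor independent, but the calculation shows why composite error grows quickly with the number of layers.

```python
# If each of n independent layers misclassifies a given cell with
# probability p, the chance that the stacked cell carries at least
# one error is 1 - (1 - p)**n.
def composite_error(p: float, n_layers: int) -> float:
    return 1 - (1 - p) ** n_layers

# With a hypothetical 10% per-layer error rate, the composite error
# climbs steeply as layers are combined.
for n in (1, 5, 10, 20):
    print(n, round(composite_error(0.10, n), 3))
```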

2.3 The ‘shifting baseline syndrome’
The baseline for all National GAP projects is determined by the satellite data used to map the vegetation cover that predicts species habitat distribution, and that imagery already includes a large percentage of anthropogenic land uses. Because historic species distributions are not known, gap analysis results capture only a fraction of any species' original habitat. In addition, the static nature of gap analysis currently cannot show the dynamic response capacity of species to change, or species viability over time (Jennings 2000). Shifting baselines therefore require that gap analysis incorporate a case-by-case consideration of management goals and definitions of conservation success.

Literature Cited
Flather, Curtis H., Kenneth R. Wilson, Denis J. Dean, and William C. McComb. (1997). “Identifying gaps in conservation networks: of indicators and uncertainty in geographic-based analyses.” Ecological Applications. 7(2): 531-542.
Jennings, Michael J. (2000). “Gap analysis: concepts, methods, and recent results.” Landscape Ecology. 15: 5-20.
Scott, J. Michael, F. Davis, B. Csuti, R. Noss, B. Butterfield, C. Groves, H. Anderson, S. Caicco, F. D’Erchia, T.C. Edwards Jr., J. Ulliman, and G. Wright. (1993). “Gap analysis: a geographic approach to biodiversity protection.” Wildlife Monographs. 123: 1-41.
Stoms, David M. (1994). “Scale dependence of species richness maps.” Professional Geographer. 46(3): 346-358.
USGS website. [2].
Last accessed December 3, 2006.Rey alejo 17:46, 6 December 2006 (UTC)[reply]

  • Comment: Sounds good. You also have academic references included, which is laudable. Perhaps something could be added in the initial paragraph that explains what gap analysis is used for these days in wildlife conservation beyond the original methods described by Scott et al. and Jennings. Referring to a general ecology or conservation biology textbook might provide ideas in this regard (e.g., Groom et al.'s Principles of Conservation Biology (3rd ed.), Sunderland, MA: Sinauer, pp. 518-521). It might also be a good idea to bear in mind that the reader will be uninitiated in ecology/conservation biology, and so technical explanations may have to give way to clear and easy-to-understand narrative. An article for gap analysis (conservation) exists. Perhaps you can integrate your writing there. Cheers, AppleJuggler 06:42, 11 December 2006 (UTC)[reply]

External links[edit]

A "fly by" editor suggested the link had been spammed "today" based on activity with other links. It had not; it had been there for a month. So I have reverted.

What does the "GAP" abbreviation in GAP analysis stand for?[edit]

If there is anybody who can help me with this, it would be very nice. I think it is easier to understand procedures that are abbreviated if I know the thoughts behind the abbreviations. —The preceding unsigned comment was added by Mavur (talkcontribs) 13:32, 28 February 2007 (UTC).[reply]

In the context of this entry GAP isn't an abbreviation, it refers to the gap between where you are and where you want to be. I'm aware of another sense where GAP stands for Good/Average/Poor. Here you're looking at the outcomes of a process (e.g. a project or an annual goal) with a view to scoring the outcome.

For example, a sales manager has an annual goal to increase market share from its current level of 20%. A 'Good' outcome might be 40+%, 'Average' 30-40%, and 'Poor' might be less than 30%.

--Stephenbooth uk (talk) 17:09, 1 April 2008 (UTC)[reply]
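The Good/Average/Poor scoring described above reduces to a simple banding function. The cut-offs below come from the market-share example; treating exactly 40% as 'Good' is an assumption based on the "40+%" wording.

```python
def gap_score(market_share: float) -> str:
    """Score an outcome against hypothetical Good/Average/Poor bands.

    Bands taken from the market-share example: Good is 40+%,
    Average is 30-40%, Poor is below 30%.
    """
    if market_share >= 40:
        return "Good"
    if market_share >= 30:
        return "Average"
    return "Poor"

print(gap_score(45))
print(gap_score(33))
print(gap_score(22))
```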


Distribution Gap[edit]

This section has many problems:

1. It appears to be original research,

2. It does not appear to have anything to do with this article,

3. It does not conform to Wikipedia style guidelines, and

4. It is not NPOV


Accordingly, I have removed it, and copied the text here. Perhaps someone (other than the contributor) can review it and decide if it belongs here, somewhere else in Wikipedia, or not at all. Robertwharvey 06:14, 24 June 2007 (UTC)[reply]


Original text follows:


Measuring the “Distribution Gap”
Richard Sander, June 2005

Chambers et al. have argued in their critiques of Systemic Analysis that even in a race-neutral admissions regime, the credentials of black law students will inevitably lag far behind those of other students at any one school, simply because the overall credentials distribution of blacks is so much lower than that of whites. In other words, if a school admits students from a certain credentials band, and blacks are more numerous toward the bottom of that band, even under a race-blind system admitted black students will face a significant credentials gap entering law school. I discuss this problem in my Reply to Critics and give a simple example with real data to illustrate why I believe this “distribution effect” is quite small (see notes 88-91 and accompanying text). In this supplement, I offer a more comprehensive example.

As I argue in Part II of Systemic Analysis, admissions officers rely very heavily on the academic credentials of law students in deciding whom to admit. If one could translate the various factors used in these decisions to specific weights, I estimate that LSAT scores and undergraduate grades (UGPAs), which I call in combination the “academic index”, would make up something like 80-90% of the total decision (even before quality of undergraduate institution is taken into account). If I wish to simulate admissions decisions, then, I should come up with a process where I add applicant credentials to another, separately-determined number that represents the non-index portions of the applicant’s qualifications. That number should of course be given much less weight than the index.

There is another aspect of the admissions process that one would like to simulate in a realistic model. Not all students attend the most elite school that admits them. The LSAC-BPS data has a good deal of information on these students who go to “second-choice” schools, and I write about this data extensively in Reply to Critics. I can estimate what proportion of students of different races choose a second-choice school, and also estimate how far down they moved in terms of school eliteness by going to a second-choice school. There are also students who are strong enough to get into a law school, but who are only interested in going to law school if they can be admitted to a relatively elite one. They may not be admitted to any school, even with high index scores, because they only applied to very difficult schools. These “overshooters” should also be taken into account.

The simple model I present here incorporates all the features I have just discussed. It’s a preliminary version, but it provides a good sense of the “distribution effect.” The race-blind model works like this:

1) I take the entire pool of law school applicants from a given admissions year (I use 2001 here, since that is a relatively recent admissions cohort and one on which there is an unusual amount of information available, thanks to Linda Wightman’s 2003 article in the Journal of Legal Education). I assign each student an academic index based on his LSAT and UGPA.

2) I eliminate those students in each index category who are “overshooters” – that is, who are not admitted to any law school. To determine the size of this group for the purposes of the model, I looked to the actual data to locate students who had credentials that placed them above others in the applicant pool, but who were still not admitted to any school.

3) I add to each student’s index score a random number that can vary, with equal probability, anywhere between -40 and +40, a total range of 80 points. This number is intended to approximate the “unobserved characteristics” of applicants (including quality of undergraduate institution), and give those characteristics the approximate weight, compared to the academic index, that admissions offices use. (Most applicants to individual law schools have indices that vary over a range of about 300 points, and the eighty-point range constitutes about twenty-five percent of that total range, which provides one way of assessing the reasonableness of my assumption.) The new list of admission indices (academic index + random number) is then sorted from top to bottom.

4) The phenomenon of “second-choice” students is simulated by moving students down the list a number of slots from their original ranking. The number of students moved down, and the number of slots moved down, is calibrated from the LSAC-BPS “second-choice” data to approximate the actual second-choice behavior of each racial group. Note that the LSAC-BPS data comes from the cohort of students entering in 1991; we have no similar information about the 2001 cohort, so it is possible that the same patterns do not apply. But it seems certain that a meaningful minority of students in any cohort will not end up attending the most elite school that admits them, and a realistic simulation needs to account for that pattern.

5) Law schools are then sorted from most elite to least elite, based on a weighted average of the schools’ academic ranking (by other academics) and their student index scores. The first-year enrollment of each school is determined.

6) The top school’s first-year class is composed of the number of people from the top of the applicant list sufficient to fill the class; then the second-ranked school’s first-year class is filled the same way, and so on. (This is perhaps the most unrealistic step in the process, since of course there is overlap among multiple schools in selecting students; more realistic models don’t produce very different results.)

When I examine the first-year classes that result, they closely approximate the actual makeup of law school student bodies in the ways (other than race, which will of course be different under a race-blind system) that I can measure them – that is, both the average academic scores of students and the degree of variation in scores (25th and 75th percentiles) are very highly correlated with real values. The results of this analysis show an average black-white gap in academic indices of 4.18 points. The gap varies, of course, from school to school: at fifty-six schools, the gap is negative (i.e., in favor of blacks), while at another forty-two schools, the gap is greater than ten points (for these schools, the gap is generally less than fifteen points). The following pages list the average index, and the black-white gap, at each school in my analysis. The particular black/white gaps at individual schools are largely random artifacts -- in my simulation, I assume that all schools pursue identical admissions policies. The average score levels for whites, in contrast, reflect the rankings of the schools, and correspond fairly closely with actual data on the academic makeup of student bodies. I will be posting more detailed simulations, and data that will enable other interested researchers to estimate a wide range of simulations, later this summer.
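The core mechanics of the numbered steps in the quoted text (index plus a uniform ±40 noise term, sort, fill classes from the top) can be sketched as a toy simulation. All counts, index parameters, and class sizes below are invented, and this sketch omits the overshooter and second-choice adjustments the original describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical applicant pool: 1000 applicants with academic indices
# drawn from an invented distribution.
n_applicants = 1000
index = rng.normal(700, 60, n_applicants)

# Step 3: add a uniform random component in [-40, +40] representing
# unobserved characteristics, then sort from strongest to weakest.
admission_score = index + rng.uniform(-40, 40, n_applicants)
order = np.argsort(admission_score)[::-1]

# Steps 5-6: three hypothetical schools, most elite first, each filling
# its class from the top of the remaining list.
class_sizes = [100, 150, 200]
classes, start = [], 0
for size in class_sizes:
    classes.append(order[start:start + size])
    start += size

# Mean academic index falls as school eliteness falls.
means = [float(index[c].mean()) for c in classes]
print([round(m, 1) for m in means])
```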

Businesses Category???[edit]

Please consider taking this page out of this category (perhaps the person thought it was the GAP clothing store) and put it instead in the Business Category or another that applies. Aug 12

Is there anybody watching?[edit]

Is there anyone watching this page? After a casual reading, I was going to correct the ill-formatted link to the Office of Government Commerce that was added by an anon recently, when I began to wonder why the sentence about the connection to Prince2 was relevant at all (it probably isn't). On checking the history, I noticed that the article lost a perfectly good paragraph last November. Once again, I have to conclude that Wikipedia isn't working anymore. -- Solipsist (talk) 19:38, 7 January 2010 (UTC)[reply]

Under the "Usage gap" heading, a sentence that says "Clearly two figures are needed[...]:" is followed by three things for which figures are needed. I don't know which is correct, but 2≠3. —Preceding unsigned comment added by 90.192.20.144 (talk) 19:11, 2 November 2010 (UTC)[reply]