The Top Fifty MFA Programs in the United States: A Comprehensive Guide

by
Seth Abramson
From the November/December 2009 issue of
Poets & Writers Magazine

One other category of consideration excluded from these rankings is long-term alumni success. In the past, articles have been written observing that, ipso facto, a strong program must graduate strong writers. This may be the case, but it is not necessarily so. Most programs concede in their promotional literature that they cannot teach talent, only (at best) craft; consequently, most programs know better than to take direct credit for graduate successes that may occur many months or even years after a student has left the program. More important, though, there is no viable method for measuring alumni success. There are simply too many tenure-track teaching positions, spots at writers colonies, book prizes, and miscellaneous writing-world laurels: To poll any appreciable percentage of the institutions offering such accolades for the biographies of their recipients—particularly when such biographical data is scarce online—would be impossible. Even if available, the use of such information would be limited. Does the success, in 2009, of a writer or poet who graduated from an MFA program in 1996 tell us anything about the present quality of that program? Given faculty turnover, and the other hard-to-quantify factors that inform a program's success or failure with respect to any one student, it seems unlikely—that is, if terms like success and failure are even appropriate or relevant at all. Likewise, and beyond the impossibility of linking any one achievement to any one period of instruction, how do we competently control for or weigh such factors as size of graduating class, degrees of achievement, and when an individual's MFA study took place? The only postgraduate assessment considered in this ranking is the determination of which programs have the most success (controlled for program size) in placing graduates in the few highly regarded, short-term post-MFA fellowships that exist. As the most pressing issue for graduating poets and writers is generally setting up a postgraduation employment plan, prospective applicants are likely to seriously consider what fellowship placement statistics say about cohort quality and program reputation.
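To make concrete what "controlled for program size" means here, consider a minimal sketch in Python, using entirely hypothetical placement counts and graduating-class sizes rather than any program's actual figures: the ranking quantity is placements per graduate, not raw placements.

# Hypothetical figures only: program -> (post-MFA fellowship placements, graduates) over the same window.
placements = {
    "Program A": (6, 50),
    "Program B": (3, 12),
    "Program C": (4, 100),
}
# A per-graduate rate keeps small programs from being penalized for producing fewer placements in absolute terms.
rates = {name: wins / grads for name, (wins, grads) in placements.items()}
for name, rate in sorted(rates.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {rate:.1%} of graduates placed in a post-MFA fellowship")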

Because there are 140 full-residency MFA programs in the United States, any school whose numerical ranking is in the top fifty in any of the ranked categories should be considered exceptional in that category.

So what is measured by these rankings, and how has the data for these measures been compiled? The most important element in the table that follows is a poll taken of more than five hundred current and prospective MFA applicants between October 2008 and April 2009. This survey was conducted on two of the largest online communities for MFA applicants, the Suburban Ecstasies and the MFA Weblog, and it differentiated among applicants on the basis of information they supplied for their individual user accounts for these communities. The data was also subsequently reviewed to remove the rare duplicate entry or multiple response. All poll respondents were asked to list, along with their genre of interest, either the programs to which they planned to apply, or, if they were not yet applicants but expected to be in the future, which programs they believed were the strongest in the nation. Finally, data from the 2008-2009 application season was compared with data from the preceding two application cycles to spot any significant unexplained deviations; fortunately, there were none. While certain programs have ascended in the rankings and certain others have descended over the past three years this poll has been conducted, the most dramatic movements can be linked to, variously, the hiring of new faculty, the creation of new programs at highly regarded universities (currently, an average of six new programs are founded each year), significant amendments to program funding packages, and improvements to the transparency of programs' online promotional materials.
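For readers curious about the mechanics of the tally, the following sketch (in Python, with invented respondent records rather than the actual poll data) shows one way duplicate entries can be removed by user account and each program counted once per respondent who named it:

from collections import Counter
# Hypothetical records: (community user account, genre, programs listed).
responses = [
    ("user01", "poetry",  ["Iowa", "Michigan", "Virginia"]),
    ("user02", "fiction", ["Iowa", "Cornell"]),
    ("user01", "poetry",  ["Iowa", "Michigan", "Virginia"]),  # duplicate entry, dropped below
]
seen, deduped = set(), []
for user_id, genre, programs in responses:
    if user_id not in seen:           # keep only the first response per account
        seen.add(user_id)
        deduped.append(programs)
votes = Counter()
for programs in deduped:
    votes.update(set(programs))       # count a program once per respondent
for program, count in votes.most_common():
    print(f"{program}: named by {count} of {len(deduped)} respondents")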

While the response to this poll from applicants and the MFA programs themselves has been overwhelmingly positive, what few criticisms have emerged generally run along one of two lines: that the poll "merely" measures the popularity of any program among current and prospective applicants, and that such individuals are not, in any case, the best arbiters of program quality, having not yet experienced either the benefits or the shortcomings of any program. These concerns have been addressed in myriad forums online over the past three years, but, generally speaking, the most succinct answer to these charges is that the 2009 poll, as well as the two previous iterations of the poll, does not measure the sort of subjective, highly individualized assessments current and former students of the various MFA programs can supply. Nor does the poll rest on the view, once taken by U.S. News & World Report, that MFA faculties know better than their students or applicants which programs are the most esteemed. Neither MFA faculties nor current or former students of the programs themselves are tasked with determining the current state of affairs in the field of creative writing MFA programs; this is the unique province, and the special task, of current applicants. MFA faculties are not paid to follow the minute, year-to-year details of the scores of full-residency MFA programs in the United States, nor is there any particular reason for them to do so, as they are, first and foremost, working writers. Current and former MFA students likewise are to be considered expert only in their own program's particularities, and with regard to those particularities they are not especially good respondents for polls because of the significant possibility of observer bias. Applicants, in contrast, are far more likely to have no particular horse in the field, and to have acknowledged the importance of the matriculation decision to their own futures by rigorously researching a wide variety of programs.

Some may wonder why these rankings do not address MA programs in English that offer creative writing concentrations, low-residency MFA programs, or creative writing PhD programs. Apart from the fact that the time and resources available for this rankings project were necessarily finite, the applicant pools for these other types of programs are much smaller than the one for full-residency MFAs and therefore are extremely difficult to sample accurately. Moreover, low-residency programs in particular are not amenable to the same type of categorical assessment as full-residency programs: Generally speaking, low-residency programs do not offer much if any financial aid, cannot offer teaching opportunities to students, employ highly tailored Internet-based pedagogies and instructional schemes, are less likely to be gauged on the basis of their locales (as applicants only spend the briefest of periods on campus), and, because their faculties are part-time, are more likely to feature star-studded faculty rosters. It would be unfair to these programs, and to their full-residency counterparts, to attempt a straight comparison between the two groups. These same types of concerns also exist, to a varying extent, with non-MFA creative writing degrees. For instance, MA degrees in creative writing (or in English with a creative writing concentration or creative thesis) are not terminal degrees, and so are structured as much to prepare students for future doctoral study as for immediate immersion in the national creative writing community.


Comments

Low-residency programs

Seth Abramson wrote: "Generally speaking, low-residency programs do not offer much if any financial aid, cannot offer teaching opportunities to students,...are less likely to be gauged on the basis of their locales (as applicants only spend the briefest of periods on campus), and, because their faculties are part-time, are more likely to feature star-studded faculty rosters."

Given that hundreds, surely thousands, of people DO apply to low-residency programs each year, doesn't that suggest that many of the qualities measured in these rankings are unimportant to a significant number of students? And what is the basis for asserting that low-residency faculties are more star-studded than others? Even if it were true, how would it matter?

Finally, don't rankings merely offer a lazy short cut to school selection, perpetuating the myth that some programs are inherently better than others, when prospective students would benefit most by finding the program that is best suited to their individual aims and needs? You may not intentionally provide these rankings as a template for school selection, but you can bet that many people will foolishly use them that way, just as people use the US News & World Report rankings.

Re:

Hi Jelhai,

You're absolutely right that the hundreds (not thousands; the national total is under 2,000) of aspiring poets and fiction writers who apply to low-residency programs annually are, generally speaking, a very different demographic than those who apply to full-residency programs: they tend to be older, they are more likely to be married and/or have children, they are more likely to be professionals (i.e. have a career rather than a job), they are more likely to be (only relatively speaking) financially stable, and they are more likely to have strong personal, financial, or logistical ties to their current location (hence the decision to apply to low-res programs, which require minimal travel and no moving). That's the reason this article did not contemplate low-res programs, in addition to the reasons already stated in the article. So when the article makes claims about MFA applicants, yes, it is referring to full-residency MFA applicants. Assessing low-residency programs and their applicants would be an entirely different project, requiring a different assessment rubric as well as--as the article implicitly acknowledges--a different series of first principles about applicant values.

As to the rankings that are here, keep in mind that what you're seeing is an abbreviated version. The full version, which appears in the upcoming print edition and as an e-book (available for purchase on this site), includes data categories for each school: duration, size, funding scheme, cost of living, teaching load, and curriculum focus (studio or academic). These are some of the most important "individual aims and needs" the hundreds and hundreds of MFA applicants I've spoken with over the past three years have referenced. Indeed, I've even done polling (the first-ever polling of its kind) to ask applicants what they value most in making their matriculation decision: in a recent poll of 325 MFA applicants (where applicants could list more than one top choice), 59% said funding was most important, 44% said reputation (e.g. ranking), 34% said location, 19% said faculty, and much smaller percentages said "curriculum" and "selectivity."
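Because respondents could list more than one top choice, those figures are percentages of respondents rather than of total selections, which is why they can sum to well over 100. A minimal sketch of that arithmetic, using made-up ballots rather than the actual poll responses:

from collections import Counter
# Hypothetical multi-select ballots; each respondent may name several considerations.
ballots = [
    {"funding", "reputation"},
    {"funding", "location"},
    {"faculty"},
    {"funding", "reputation", "location"},
]
counts = Counter()
for ballot in ballots:
    counts.update(ballot)
total = len(ballots)
for choice, n in counts.most_common():
    print(f"{choice}: {n / total:.0%} of {total} respondents")  # percent of respondents, not of selections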

These rankings (and the article above) specifically urge applicants to make their own decisions about location, but provide ample information about funding, reputation, curriculum, and selectivity--four of applicants' top six matriculation considerations. Needless to say, many applicants will have "individual aims and needs" that they need to consider in making their matriculation decision, and I always urge them to look to those needs with the same fervor with which they consider funding, reputation, location, and so on. But to imply these rankings haven't done the necessary footwork to ask applicants what their primary aims and needs are is simply incorrect. In fact, in the poll referenced above applicants were given the opportunity to vote for "none of the above"--meaning, they were invited to say that their top consideration in choosing a school was something other than the six categories referenced above. Only 1% of poll respondents chose this option. So when we speak casually of "individual aims and needs," I think we need to remember that these aims and needs are no longer as unknowable as they once were--largely due to efforts like the one that produced these rankings. And again, for those who don't see their own aims and needs reflected in the data chart that accompanies this ranking (and which you haven't seen yet), I say--as I always say--that these rankings and this data should be used only as a starting point for making an intensely personal and particularized decision.

Take care,
Seth

Re:

P.S. I should say, too, that the poll I mentioned above is just one of many. Another poll (of 371 applicants, where applicants could pick more than one first choice) showed that 57% of applicants have as their top "aim" getting funded "time to write," 42% say employability (i.e. the degree itself), 36% say mentoring (which causes them to primarily consider program size, as program size helps determine student-to-faculty ratio), 34% say "community" (which again causes applicants to consider program size, though it pushes many of these applicants to consider larger programs, i.e. larger communities), 19% say "the credential" (again, as represented by the degree itself, though this also pushes such applicants to favor shorter programs, with a lower time-to-degree), and much smaller percentages said that they wanted an MFA to validate themselves as writers or to avoid full-time employment (very similar to wanting "time to write," per the above, just as "validation" is intimately related to "mentoring" and "the credential"). Again, these polls were not intended to be exhaustive, though it's noteworthy that 0% of poll respondents chose "none of the above."

Suspicious

A graduate of Harvard Law School and the Iowa Writers' Workshop... I'm not accusing anyone of anything, but you have to realize how suspicious this looks.

Re:

Hi Clairels,

I'd respond to your comment, but honestly I have absolutely no idea what you mean to imply or what your concern is. I attended both those programs (J.D., 2001; M.F.A., 2009), and certainly don't regret either experience.

Take care,
S.


P.S. I think it was the reference to HLS that threw me. If you're talking about my IWW affiliation (as I now see you might be), I don't know what to tell you except to say that you won't find a single person who's well-versed in the field of creative writing who's surprised by Iowa's placement in the poll--a poll that was taken publicly and with full transparency, and whose results are echoed by the 2007 poll, the 2008 poll, the (ongoing) 2011 poll, USNWR's 1996 poll, and the 2007 MFA research conducted by The Atlantic. Iowa has been regarded as the top MFA program in the United States since the Roosevelt Administration (1936). In three years of running MFA polls, I think you're the first person to suggest to me (even indirectly) that Iowa might have finished first in the poll for any reason other than that it finished first in the poll (to no one's surprise). So no, I can't say that I find my affiliation with the IWW--an affiliation I share with thousands of poets (Iowa graduates 250 poets every decade)--"suspicious." --S.


To be fair, Seth, I think it's important to point out that Iowa was the _only_ MFA program in 1936. So of course it was regarded as the top program then--among one. (Not arguing about anything else, promise; the way that fact was dropped just seemed potentially misleading.)


Hi SJ,

Sorry for any confusion--my point was not that Iowa is considered the top program because it was founded in 1936, but rather that it has been considered the top program since 1936. It's had that position (across years of various polls, rankings, and articles) for many reasons, and certainly longevity has played a part--more alumni mean more alumni "successes," for instance, particularly during those decades in which there was less competition with other MFA programs--but it's obviously much more complicated than a simple matter of temporality or history. It's worth noting that these rankings are among the first (if not the first) rankings to not implicitly or explicitly indicate that the IWW is the top program in all respects. That is, the IWW is not ranked even in the top 20 here in annual funding, total funding, or selectivity. To the extent these rankings were directly inspired by research done by Tom Kealey in 2005 (and I think Kealey's implied rankings were indeed the first to point out Iowa's varying strengths in different areas), I do consider this ranking to have treated the IWW in a considerably less sacrosanct manner than previous rankings.

Best,

Seth

Acceptance Rates

How exactly were the "selective rates" for each school determined on this list?


Hi JTL,

Per my contract with P&W (as I'm sure you can understand), most of the MFA data has been temporarily removed from my website under a non-compete clause (which I find wholly reasonable). It'll go back up on the site at the end of February 2010. In the meantime, if you have a specific question about data not available from P&W, e-mail me and I'll do my best to answer it.

Be well,

Seth

"Sorry for any confusion--my

"Sorry for any confusion--my point was not that Iowa is considered the top program because it was founded in 1936, but rather that it has been considered the top program since 1936. " What? This was exactly my point. It's essentially meaningless to claim that IWW was the "top" program since 1936 during the decades when it was the only program.


Hi Phoebe,

I've addressed our miscommunication elsewhere (i.e. we're dialoguing elsewhere), so I won't get into much more detail here. To this point (and accepting that one day this may change), Iowa has shown that it is popularly considered the top program no matter how many competitors it has: 141 full-res competitors, as now, or only a handful, as it had in the 1950s. As to the period between 1936 and (say) 1950, a) Iowa obviously must get credit for originating the idea of the graduate creative writing program, as this indicates both that it would have been regarded as the top program even if it had had competition, and that even in the absence of competition Iowa does get credit for being the "top" program--because it empirically was, and because it deserved to be for having thought of an idea others did not; and b) the more than half a century since 1950 shows that Iowa's reputation is unaffected by how much or how little competition it has. So to the extent you're suggesting Iowa was the top program from (say) 1936 to 1950 only because it was the only program--which bare fact is itself true--I think that's actually more potentially misleading than anything I wrote, as we have ample evidence to suggest that something inherent in the Iowa model (and the mere fact that the originator of an idea holds a special cachet as to that idea for many years afterward) would have made Iowa #1 from 1936 to 1950 no matter what. So I don't agree with you that Iowa's reputation from 1936 to 1950 is "meaningless."

Be well,

Seth

question about collection of information

Just wondering how you collected information about funding and about acceptance rates, and success of graduates, which are not always available on promotional materials. I would have gladly shared this information from the program that I direct, but I was not consulted.


Hi SSTG,

This was one of several philosophical dilemmas the rankings were forced to confront over the past three years (others of which I will be writing on in more detail in the near future, in response to several queries about the rankings' methodology). Specifically, the question of how to collect data presented, among many others, several issues relevant to your comment/query: 1) Even when the AWP, the organization that officially represents MFA programs nationally, has asked programs for data (e.g. in 2007), only around 50% have decided to reply (and for some crucial, highly specific queries, only 35% have agreed to provide responses); 2) Consequently, any direct request made to the programs by a freelance journalist would, we can safely assume, a) be met with silence at a rate much greater than 50% (as P&W does not have as much of a direct claim on MFA programs' administrative compliance, understandably, as AWP does), and b) privilege especially those programs that responded to that journalist's request, when in fact the programs' private "responses" would do nothing to make this information directly and transparently available to applicants on a daily basis--the purpose of all educational rankings being, ultimately, to achieve just this sort of transparency of vital data for those most in need of it.

In other words, to any program (and this is by no means directed particularly at you, SSTG) that says it would volunteer, or would have volunteered, such information if asked, I respond: 1) Did you provide this information to (as it were) your "parent" organization when it was requested of you in 2007? 2) Do you regularly provide this information to applicants when they request it by phone, mail, e-mail, or in person? 3) If you have this information and are willing to part with it, why does it not appear anywhere on your program website, whose content you control absolutely and in real time, and whose fundamental premise is to answer those questions most central to your applicant constituency (among whom, you surely know, funding and selectivity perhaps rank one and two as items of interest)?

As indicated in the methodology article, the rankings were intended to privilege transparency--thus, program data that had been made transparently available was privileged both for ranking purposes and for long-term data-collection purposes. Consequently, funding and admissions data was compiled via a number of sources, including (variously) direct interviews with current applicants who had been successful in gathering this data upon request, data provided in widely-disseminated letters of acceptance or rejection provided by programs to applicants, individual program websites, (in some instances) data provided directly by the programs to researchers (if that program represented that it was in the process of adding such information to its website and subsequently did so), public statements of MFA faculty members or administrators on high-traffic MFA-applicant blogs/communities or major media outlets, and so on. In other words, programs that made an effort to disseminate the information publicly and widely were acknowledged as having done so via the many methods available for such transmission -- with program websites being the most common of these -- instead of an individual researcher making a private phone call and getting told intimate program data on a case-by-case basis only.
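To make the idea of privileging transparency concrete, here is a minimal sketch (with hypothetical source labels and figures, not the data actually collected) of selecting, for each program, the figure reported by the most publicly transparent source available:

# Lower number = more publicly transparent, and therefore preferred.
SOURCE_PRIORITY = {
    "program website": 0,
    "public statement by faculty or administrator": 1,
    "acceptance/rejection letter shared by applicants": 2,
    "applicant interview": 3,
}
# Hypothetical acceptance-rate figures, keyed by the source that reported them.
reported = {
    "Program A": {"program website": 0.04, "applicant interview": 0.05},
    "Program B": {"acceptance/rejection letter shared by applicants": 0.11},
}
for program, by_source in reported.items():
    source = min(by_source, key=SOURCE_PRIORITY.get)  # most transparent source available
    print(f"{program}: acceptance rate {by_source[source]:.0%} (from {source})")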

Whereas many instances of research and ranking are philosophy-neutral, this ranking was--transparently, and this has been the case since January of 2007 (when the research effort started)--philosophy-inflected. It might appeal to my own ego to know that programs are personally willing to part with information if I (and only I) ask for it, but that does no good whatsoever to the thousands of applicants who are not finding this data on program websites when they should, or (just as commonly, I'm afraid) are being refused this data when they request it or (in some rare instances) lied to about what the data is and/or its availability. On some level I wonder why a ranking should (or how it ethically can) acknowledge any data a program didn't see fit to offer, with equal alacrity, publicly and to its most important consumers--who are paying application fees for the privilege of being consumers--with the sort of transparency and broad dissemination that suggests the program is indeed willing to be accountable for that data.

I strongly believe MFA directors are near-universally well-intentioned; that said, when applicants have been crying out for funding and admissions data, in the thousands, for more than a decade now, in massive online communities MFA programs are aware of (and in many cases monitor), the claim that applicants had only to ask for this obviously relevant data to then immediately receive it somehow rings false. While the consequence of using a philosophically inflected research method is that the resultant data is (as the article, rankings, and widely-written-on research environment freely concede) a) unscientific, and b) probative rather than conclusive, neither a scientific nor a conclusive study of a target population can be conducted when--even under the most ideal conditions (cf. AWP's survey in 2007)--around half of the target population withholds the very data required by such a proposed methodology. To be candid, then, the question you've asked me is far better addressed to your fellow MFA directors, as only one of two possibilities can be readily envisioned: 1) your fellow directors did not want their efforts assessed (ever) and/or the resultant findings published (ever), and thus withheld the data (for years or decades) that would have made such an assessment possible, or 2) your fellow directors heartily (as you've indicated is the case for you yourself) welcomed such an analysis, and simply, inexplicably, failed to press the button on their keyboards that would have uploaded this absolutely foundational program data to their programs' long-standing public websites. In the article, then, when I refer to a "virtuous circle," this is what I'm referring to--the manner in which a ranking system can make programs more accountable to their prospective students, more transparent to the public, and (felicitously) convince programs to finally take measures to which, speaking in retrospect, they now claim to have been totally and without reservation amenable.

This is a brief answer to a question I hope to address in greater detail in the future, so I hope you won't mistake my candor here for sarcasm--to the extent I seem somewhat bewildered by any program's presently declared willingness to provide information never before provided (but within the program's power to have immediately and publicly provided 24/7/365 for years), that bewilderment is not a put-on, or intended as smarmy or performative, but sincere and abiding.

Best wishes,

Seth

nothing is absolutely objective

Although I respect the fact that Mr. Abramson wants to find a more objective means of ranking MFA programs, it's inaccurate and unprofessional to assert that "none of the data used for the rankings that follow was subjective." The way in which a person presents data always slants the interpretation of it. Sociolinguists have been studying this for years. So Mr. Abramson's claim that these rankings are based on objective data misleads the reader into believing this was a research study. It's not. As a PhD researcher, I have 20 years of experience collecting, analyzing, and interpreting data, and I've authored numerous published papers. I didn't read anything that resembled a traditional "methods" section, which is tediously detailed, concrete, and explicit. Without such a description of how the data is gathered and tabulated, it's impossible to evaluate it. I understand that Mr. Abramson does not have a vested interest in the outcome of the rankings, and he's assuming that the respondents don't either. Even if we are comfortable with the second assumption, we are left wondering how the respondents were chosen, how many of those contacted chose to participate, how representative these respondents are of the larger group, and whether any statistics were run. Were there statistically significant differences between programs? Even statistics can be subjectively employed and reported, but they are more objective than asking a few people what they think and then presenting those results as definitive. It is absolutely possible that these rankings represent some core of truth, but it's also possible that they don't. Without an explicit description of the methodology, it's impossible for readers to evaluate this. Unfortunately, in the meantime, the assertion that these rankings have been objectively determined will mislead impressionable young writers interested in pursuing an MFA. It will potentially influence their application decisions and possibly damage the reputation of programs NOT in the top 50.


Hi ewjunc,

The article's reference to subjectivity was specifically targeted toward the history of rankings in this field, as well as the history of this particular research. The element of the data chart that consists of applicant polling is unscientific, has always been presented as such, and has never in three years claimed (online or in print) to be a scientific research study. Such a study would be impossible for two reasons (among others): 1) the programs are by and large withholding the data that would be necessary for a scientific research study; 2) scientific polling could likewise only be done with the assistance of the programs, which have thus far refused (at a rate greater than 50%, per AWP's 2007 survey) to respond to data queries from even their own professional organization.

So when I spoke of "subjectivity" in the article, it was a reference to two things: 1) the most recent attempt at a ranking of programs, done in 2007 by Atlantic journo Ed Delaney (which Delaney himself said was not a "ranking," but which was undoubtedly "subjective" inasmuch as Delaney admitted his own opinions were given weight in calculating results); 2) the fact that many have referred to the TSE research, in the past, as "my" rankings, i.e. as a reflection of my own subjective opinion of which programs are best. They are not any such thing. The funding, selectivity, and placement rankings are based on hard data only. The polling is based singularly on the opinions of others, and is not inflected by my own opinions.

You are certainly correct to say that every ranking system ever devised is--in one sense at least--subjective, because someone, somewhere, had to create its methodology. I rather think that would make the word "subjective" lose all meaning; indeed, if all rankings are by definition subjective, there's no possibility anyone would confuse anything I wrote for suggesting otherwise, as "objective" rankings (definitionally, per you) are impossible. I'm confident most readers took me to mean that these rankings do not reflect the subjective opinion of their author. But to say that these rankings "present [their] results as definitive," when the article accompanying the rankings explicitly disavows any such totalizing quality, is I think unfair.

Likewise, to speak of statistical research of the doctoral sort is to make two mistakes: 1) it ignores the fact that creating a ranking methodology is a much more complex question than merely one of statistics (purpose, audience, impact, and many other pragmatic considerations come into play); 2) it presumes that we are in some sort of laboratory environment, and not one in which the test subjects have essentially, at a greater than 60% clip, fled the building and taken all their data with them. To repeat: A scientific study of MFA programs will be impossible until some organization or entity obligates these programs to compile and release such data publicly. Until that time comes, any research on MFA programs will be unscientific--though, as you concede is possible with these rankings (and as I believe with certainty is the case), they are probative.

As to damaging the reputations of programs not in the top fifty, I can only tell you that what I'm hearing from both applicants and the programs themselves is quite different from what you're describing: inasmuch as these rankings are the first-ever comprehensive listing of domestic, full-residency MFA programs (don't forget that you can find programs #53 through #142 online, many of which have achieved a top-fifty ranking in individual categories, and are noted for this), applicants are telling me that they're discovering new programs they never would have known about otherwise, and countless smaller programs are reporting, since the time the TSE research was initiated in 2007, significant increases in the size of their applicant pools.

Be well,

Seth


Hi Seth, Good job. Have a question for you. Are these rankings endorsed by Tom Kealey? It seems that you guys have been buddy-buddy in recent years, working together on the 2nd edition of the MFA Handbook, but if I remember correctly, Tom argued in the first edition that Iowa was not even in the top ten of his list. Has his view changed since then? Are these rankings going to be included in the new edition of his MFA Handbook? Just curious! Kevin


Hi there Kevin, thanks so much for your comment. I think you'd have to ask Tom directly to get his thoughts on this; I know he endorses MFA research generally, and in the past he's endorsed (indeed many times) my research in particular, but as to these specific rankings I've not asked his opinion, so I couldn't tell you (though to the extent the methodology used here was the same as that which was used for the rankings Tom published [and I compiled] in the 2008 edition of the Handbook, I think many will, understandably, presume some form of tacit or explicit endorsement on Tom's part).

With respect to Iowa's (approximately) #11 placement in Tom's rankings of five years ago, I believe (though again you'd have to check in with him on this) that a primary reason for that ranking was Tom's impression, at the time, that Iowa had a tiered funding system. In fact it does not; in roughly 2005 it had just switched from a tiered to a fully funded system, but at that point, so close to the switch-over, Tom had been given bad information by someone who was only familiar with Iowa's old funding system under Frank Conroy (IWW Director, 1987-2005). Under Sam Chang all Iowa students are fully funded, and I'm fairly well convinced (knowing Tom) that had he had this information 4-5 years ago he would have put Iowa in the top ten nationally, though where in that top ten they would have gone (in his estimation) is, again, a question for Tom to answer, I think. Keep in mind that in the Second Edition of the Handbook (2008), Iowa is actually ranked #1 overall, top 25 in funding, and Top Tier (i.e. top 12) in the national comprehensive tiered rankings--this last of which was the only ranking that included my own subjective opinions, but which I know Tom had indicated to me seemed very much in line with his own thinking.

As to whether the rankings will appear in the next edition of the Handbook, I haven't spoken to Tom about this, though certainly if I do end up involved with that project, as I was in the last one, I'll have some input into that decision. Of course, if and when 2011 MFA rankings are released (and that's still something I haven't decided), a lot of things will need to be updated, including the Handbook.

Again, thanks for the note! Be well,

--S.

Re: UNH Core Faculty--include Mekeel McBride, please!

Most people who have studied poetry at UNH have studied with poet & professor Mekeel McBride--often more than once (or twice or three times or...). Not only is she core, she is remarkable. Please correct this omission from the University of New Hampshire/MFA/Core Faculty listing.

Other factors: health insurance

Since Seth Abramson is considering cost of living and funding, I think he should consider another, really huge factor: Does the school offer health insurance? There are some very highly ranked CUNY programs. Yes, CUNY is cheap, but there is no health insurance. If you really want to commit to a writing program, you don't have time for a full-time job with health benefits. Health insurance was a big factor in my selection, and I'm sure it is for many others as well.