2013 MFA Index: Further Reading

by Seth Abramson
From the September/October 2012 issue of Poets & Writers Magazine

Note: The following article explains the approach used to compile the 2013 MFA Index, published in the September/October 2012 print edition of Poets & Writers Magazine.

The 2013 MFA Index comprises two tables: one of 85 full-residency MFA programs and one of 29 low-residency MFA programs. Full-residency MFA programs are assessed on the basis of twenty measures: eight listings of survey results, and twelve listings of other important program features. While the eight surveys detailed in the full-residency table cannot be classified as scientific, all are predicated upon sufficient hard data to be substantially probative. A scientific survey of full- or low-residency creative writing MFA programs, or doctoral creative writing programs, is not presently possible, as more than half of the nation's full- and low-residency graduate creative writing programs do not publicly release the data necessary for such an assessment. 

Three of the eight full-residency surveys are based on the application decisions of a large sample of current MFA applicants. In each survey, programs are ordered on the basis of the number of times they appeared on applicants’ application lists; the resulting numeric ordering therefore assesses only the popularity of individual programs among a large sample of well-researched aspiring poets and writers, and is not an assessment of overall program quality. Prospective MFA students have a wide range of resources at their disposal in selecting where to apply: not only quantitative data but also programs’ promotional materials; first-hand interviews with program staff, faculty, and current and former students; large online communities of fellow applicants, many of which are devoted to MFA research, discussion, and application advising; their own past experiences in undergraduate and non-institutional workshop settings; the literary oeuvres of current faculty members and recent alumni; previous program assessments in various online and print media, including first-hand accounts from, and interviews with, current and former faculty and graduates; and, in some instances, program visits or prior first-hand experience workshopping with selected faculty members. Even so, some information is necessarily impossible for prospective MFA students to access unless and until they become matriculants. Once students have matriculated, however, they may develop subjective attachments to their alma maters, which makes objective assessment of their own experiences, as opposed to the provision of definitionally “self-serving” survey responses, difficult or even impossible. For this reason, among others, current MFA students are not asked to self-report on their programs, nor are they considered to have a broader, more accurate, or more timely knowledge of programs they do not attend than those unmatriculated applicants who are currently researching program options.

In the 2013 MFA Index, programs appearing in the top half of each applicant popularity survey are noted by their numeric placement in the survey results. Programs not appearing in the top half of the 2012 popularity survey or the four-year applicant popularity survey (taken between April 16, 2008, and April 15, 2012) may still appear in the full-residency table, provided they appear in the top half of three of the seven survey areas described below.

The five hard data–based survey columns in the full-residency table are as follows: selectivity, funding, student-faculty ratio, fellowship placement, and job placement. These categories appear in the table in the chronological order in which applicants to graduate creative writing programs encounter them: a program’s selectivity determines, all things being equal, an individual applicant’s likelihood of securing admission; the funding category indicates, all things being equal, what aid package will be attached to that program admission if and when it comes; student-faculty ratio gestures toward, all things being equal, a student’s ability to access individual faculty members while in-program; fellowship placement generally measures the success of individual programs in placing recent graduates in post-MFA fellowships; and job placement generally measures the success of individual programs in placing graduates who have already achieved some measure of fellowship or publishing success in full-time creative writing teaching positions in higher education.

These survey results are scientific to the extent that they order programs on the basis of quantitative data publicly released by the programs themselves, and unscientific to the extent that not every program has released data for every category of assessment. These five columns therefore constitute an ordering of all publicly known data rather than an ordering of all extant data. A full complement of funding and admissions data is available for well over half of the nation's full-residency MFA programs; the remaining programs are generally smaller, newer, lightly advertised, nondomestic, or regional in terms of their applicant base. As all of these programs have websites, however, and as all enjoy exclusive dominion over their online presence, the absence of any specific funding or selectivity data in these programs' online promotional materials is taken as an indication that these programs fully fund less than a third of their students and do not have an acceptance rate low enough for inclusion in the top half of programs in this category (currently, a program's yield-exclusive acceptance rate would need to be less than 11.1 percent for it to be included in the top half of the selectivity category). The full-residency table is based in part on the presumption that it would be counterintuitive for a program providing full funding to a substantial percentage of its student body to not indicate as much in its promotional materials. Program websites are regularly reviewed to determine whether a program has added information to its online profile.
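To make the selectivity cutoff concrete, here is a minimal sketch of the yield-exclusive calculation described above. The 11.1 percent threshold is the figure stated in this article; the offer and application counts are hypothetical.

```python
# Yield-exclusive acceptance rate: offers extended divided by
# applications received, ignoring how many admitted students enroll.
def acceptance_rate(offers_extended: int, applications: int) -> float:
    return offers_extended / applications

TOP_HALF_CUTOFF = 0.111  # stated cutoff for the selectivity category

# Hypothetical program: 28 offers on 300 applications.
rate = acceptance_rate(28, 300)
print(f"{rate:.1%}")           # 9.3%
print(rate < TOP_HALF_CUTOFF)  # True -> top half in selectivity
```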

Based on the data presently available, it is not anticipated that any of those programs without a full complement of funding and admissions data available in some form online would have appeared in the top half of full-residency programs in the funding category. These programs, given the incompleteness of their promotional materials, are also much less likely to attract sufficient applications to be eligible for the selectivity listing; a program must receive at least fifty applications annually to be considered eligible for this listing. As to the two placement-related surveys, these do not rely on programs' promotional materials or on their willingness to release internal data to individual applicants or groups of applicants, so all programs nationally, both full- and low-residency, are equally eligible for a top-half placement in the fellowship and job placement categories. Data sufficient to calculate student-faculty ratios for virtually every MFA program in the United States were also readily available.

The overlap between those programs appearing in the top half of the two fiction/poetry surveys—the third applicant survey being of nonfiction applicants only—and those programs appearing in the top half of the five hard-data survey results is significant. Of the 71 full-residency programs that appear in the top half of both the one-year and four-year survey results, 60 (85 percent) are also in the top half in selectivity, and two (3 percent) have no selectivity data available; 60 (85 percent) appear in the top half in the funding category; 47 (66 percent) appear in the top half in student-faculty ratio; 56 (79 percent) appear in the top half in fellowship placement; and 42 (59 percent) appear in the top half in job placement. Of the 32 programs listed in the chart as placing in the top half of the nonfiction survey, 28 (88 percent) also appear in the top half of both the one-year and four-year (fiction/poetry) surveys, with another three (9 percent) appearing in the top half of one of the two. In a sign that applicant mores remain fairly consistent over time, of the 60 programs appearing in the top half of programs in the four-year applicant survey, 58 (97 percent) also appear in the top half of programs in the one-year survey.
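As a rough check, the percentages above follow directly from the quoted counts. A minimal sketch (all counts taken from this article; percentages rounded as printed):

```python
# Programs in the top half of both the one-year and four-year surveys.
BOTH_SURVEYS = 71

# How many of those 71 also place in the top half of each hard-data survey.
overlaps = {
    "selectivity": 60,
    "funding": 60,
    "student-faculty ratio": 47,
    "fellowship placement": 56,
    "job placement": 42,
}

for category, count in overlaps.items():
    print(f"{category}: {count}/{BOTH_SURVEYS} = {count / BOTH_SURVEYS:.0%}")
# selectivity: 60/71 = 85%
# funding: 60/71 = 85%
# student-faculty ratio: 47/71 = 66%
# fellowship placement: 56/71 = 79%
# job placement: 42/71 = 59%
```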

The approach to compiling the 2013 MFA Index was devised with the following goals in mind: (1) To better inform applicants of their program options by offering the nation’s first complete listing of graduate creative writing programs; (2) to more widely disseminate hard data regarding objective but non-exhaustive/non-conclusory measures of program quality, which often play a role in applicants’ application and matriculation decisions; (3) to be responsive to, and reflective of, the opinions, interests, concerns, and values of current applicants to graduate creative writing programs; (4) to emphasize the importance of both inputs and outputs in assessing individual graduate creative writing programs; and (5) to enhance the transparency of the admissions process for present and future applicants to graduate creative writing programs.

Popularity Surveys

In the twelve months between April 16, 2011, and April 15, 2012, 540 full-residency MFA applicants were surveyed on the MFA Draft 2012 Group, hosted by Facebook.com. Created by a group of then-current MFA applicants in the early spring of 2011, the group at its peak had nearly 1,500 members during the survey period, and featured dozens of new MFA-related conversation threads—some garnering up to a thousand individual responses—each day. The MFA Draft 2012 Group was created and is moderated by the applicants themselves; admission of new members was and is determined entirely by applicants; and decisions regarding the production of personal or program information for purposes of research were likewise made entirely by applicants. The group was promoted via several methods: word of mouth; links and references in the MFA Draft 2011 Group, the predecessor to the MFA Draft 2012 Group (there is also, now, a well-attended MFA Draft 2013 Group); and links and references on the Creative Writing MFA Blog founded by novelist Tom Kealey. The four-year survey is a compilation of the data from this one-year survey and the past three years of Poets & Writers Magazine program assessments.

Individual users in the MFA Draft 2012 Group were distinguished by their proper names, as reflected by the full names (first and last) associated with their Facebook accounts. Internet research was conducted to verify applicants’ identities as and when authenticity was in question. The document in which Draft 2012 applicants voluntarily compiled their application decisions was part of an application that tracks all changes to uploaded documents by editors’ full names/Facebook accounts. This application ensured that any and all changes to the document were traceable to specific users. Users were permitted to amend their application lists in real time; though fewer than 15 percent of users elected to make amendments to their lists once posted, all changes to applicants’ lists were accounted for by both the one-year and four-year applicant surveys appearing in the program tables. Substantial additional security measures were taken to ensure the authenticity of compiled application lists.

As noted, surveyed applicants voluntarily listed the programs to which they had applied or intended to apply, and were permitted to adjust these lists at any point during the survey period.

Only recently collected data is of use to applicants. The one-year full-residency survey uses no survey data predating April 16, 2011; the four-year full-residency survey uses no survey data predating April 16, 2008; the low-residency survey uses no data predating April 16, 2007; and the creative writing doctoral survey uses no data predating April 16, 2007. The longer survey periods for low-residency MFA and creative writing doctoral programs were necessitated by the significantly smaller applicant pools for these degrees.

Eligibility for Inclusion in the 2013 MFA Index

Eligibility for the 2013 MFA Index was determined by how programs placed in seven survey areas: popularity among applicants, selectivity, funding, student-faculty ratio, fellowship placement, job placement, and location. Programs are categorized by the number of survey areas in which they place in the top half of all programs a) eligible for the category, and b) with data publicly available. The number of programs considered eligible for each category of assessment varies by category; in some instances (such as fellowship placement and job placement) newer programs cannot yet be added to the pool of eligible programs because they have yet to graduate any poets or writers, whereas in other instances (such as selectivity, funding, and student-faculty ratio) certain programs may not yet have released the data necessary for them to be assessed in these areas. Popularity and location are the only categories in which all presently extant programs (or, in the case of the four-year applicant survey, all programs extant for a majority of the four-year survey period) were automatically members of the pool of eligible programs. Otherwise, the pool of eligible programs was 124 for the selectivity survey, 150 for the funding survey, 130 for the student-faculty ratio survey, 130 for the fellowship placement survey, and 90 for the job placement survey. The reason for the significantly lower number of eligible programs in the job placement category is that MFA graduates are generally not competitive for the sort of hires catalogued by the job placement assessment—full-time, non-adjunct positions teaching creative writing at the college/university level—until they have been away from their alma maters for at least five years, during which time they have continued to publish in their genre (and in many instances gained teaching experience as adjuncts) to make themselves more marketable as full-time hires.
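A minimal sketch of this categorization rule, using the pool sizes quoted above; the per-program ranks are invented for illustration, not actual survey results:

```python
# Pool sizes for the five hard-data categories, as given in the article.
POOL_SIZES = {
    "selectivity": 124,
    "funding": 150,
    "student-faculty ratio": 130,
    "fellowship placement": 130,
    "job placement": 90,
}

def top_half_count(ranks: dict, pools: dict) -> int:
    """Count the categories in which a program places in the top half
    of the eligible pool. A rank of None means the program is either
    ineligible or has released no data for that category."""
    return sum(
        1
        for category, rank in ranks.items()
        if rank is not None and rank <= pools[category] / 2
    )

# Hypothetical program: ranked in four categories, no job-placement data.
ranks = {
    "selectivity": 40,            # top half (cutoff 62)
    "funding": 90,                # bottom half (cutoff 75)
    "student-faculty ratio": 30,  # top half (cutoff 65)
    "fellowship placement": 60,   # top half (cutoff 65)
    "job placement": None,
}
print(top_half_count(ranks, POOL_SIZES))  # 3
```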

Given that the job placement survey tracks hires made between 2009 and 2012, only programs known to have graduated at least two classes of poets and writers by 2004 are considered to have had a full opportunity to place in the job placement category, with the result being that the number of programs founded prior to fall 2001 was used as the “eligible pool” figure for this measure (but note that, in order not to disadvantage programs founded after this time, such programs were nevertheless included in the survey if they successfully placed a graduate in an eligible position). As 90 full-residency programs were founded prior to fall 2001, the size of the pool for the job placement measure was set at 90. In future years the size of the “eligible pool,” consistent with the above-stated methodology, will increase.
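One way to reconstruct the date arithmetic behind that cutoff, assuming (as the article implies) a two-year degree as the shortest common program length:

```python
FIRST_HIRE_YEAR = 2009   # earliest hires tracked by the survey
YEARS_ON_MARKET = 5      # typical lag before graduates win full-time posts
CLASSES_REQUIRED = 2     # classes a program must have graduated
PROGRAM_LENGTH = 2       # assumed length of the shortest common MFA, in years

# A program's second class must have graduated by 2009 - 5 = 2004, so the
# program must have been founded PROGRAM_LENGTH + 1 years before that.
latest_grad_year = FIRST_HIRE_YEAR - YEARS_ON_MARKET
latest_founding_year = latest_grad_year - PROGRAM_LENGTH - (CLASSES_REQUIRED - 1)
print(latest_founding_year)  # 2001 -> "founded prior to fall 2001"
```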

MFA Program Response Rates

Graduate degree programs in creative writing respond to assessment-related inquiries at a lower rate than do programs in almost any other field of study in the United States. The following response rates were reported for questionnaires sent pursuant to the 2012 U.S. News & World Report program assessments (in each instance, the field of study is followed by the response rate from all programs surveyed in the field): Engineering (98 percent); Business (91 percent); Criminology (90 percent); Education (90 percent); Medicine (84 percent); Healthcare Management (76 percent); Statistics (67 percent); Law (66 percent); Public Health (61 percent); Audiology (57 percent); Library and Information Studies (56 percent); Pharmacological Sciences (56 percent); Social Work (56 percent); Occupational Therapy (53 percent); Veterinary Medicine (48 percent); Nursing (47 percent); Computer Science (46 percent); Physician Assistance (45 percent); Sociology (43 percent); Speech-Language Pathology (42 percent); Public Affairs (40 percent); Rehabilitation Counseling (40 percent); Fine Arts (39 percent); Political Science (37 percent); Economics (34 percent); Mathematics (34 percent); Physical Therapy (33 percent); English (31 percent); Physics (31 percent); Earth Sciences (29 percent); Clinical Psychology (28 percent); Chemistry (25 percent); Psychology (25 percent); History (23 percent); and Biological Sciences (15 percent). Respondent institutions in each of these academic fields were aware that their questionnaire responses would not be kept confidential, and that their participation in surveys sponsored by U.S. News & World Report would result in publication of a substantial stock of program-specific data regarding each university queried.

Every two years, the Association of Writers & Writing Programs (AWP) sends a questionnaire to programs in the field of creative writing—a field whose administrators and faculty are no longer surveyed (and whose programs are no longer assessed) by U.S. News & World Report—in much the same way U.S. News & World Report does for the thirty-five fields of study listed above. A crucial difference between the two questionnaires, however, is that the AWP questionnaire guarantees anonymity to its respondents; AWP releases no program-specific data or survey results pursuant to its biennial questionnaire. It is worth noting, too, that AWP estimates (as of its 2009 Annual Report) that 34 percent of programs in the field of creative writing have declined to become members of AWP. These programs are not subject to AWP questionnaires.

According to AWP’s publicly released summary of the program questionnaire it distributed in 2007, between 40 and 60 percent of AWP’s member programs declined to answer AWP’s queries regarding internal admissions and funding data. Specifically, 47 percent of programs declined to reveal how many assistantships they offered annually to incoming students, 61 percent declined to reveal the stipend offered to teaching assistants, 56 percent declined to reveal whether they offered a full-tuition waiver to teaching assistants, 49 percent declined to reveal how many scholarships were offered to incoming students, 55 percent declined to reveal their annual number of applicants, and 52 percent declined to reveal the size of their annual matriculating class. 


Comments

Many of the same flaws

I've so far put off commenting on this cosmetically altered version of the "rankings." So, apparently, have others. The people I've discussed this topic with haven't held off because they consider the problems with this barely different "methodology" solved; they've held off over the question of whether we should ignore this enterprise altogether.

I decided I shouldn't. The rankings rest on so many logical and empirical flaws that it's important, I think, for someone to address them (and I'm hardly alone in this opinion). So I'm gonna add my 2 cents in the next few weeks, when I have spare moments.

As I mentioned last year, my brother is a mathematical (as opposed to applied) statistician--which means he also understands the applications of stats. Having already read half of the "methodology" (it's still so long), my brother raises the obvious question about sample size. Before I received a response from him, I'd raised to him the question of the Central Limit Theorem, and the question of when it does not apply. (I first encountered the theorem in Stat 101.) It does not apply well to this kind of sampling.

As I did early on last year, I won't read any of Seth's responses unless a friend tells me that I ought to because of some inaccuracy he's made about my claims or some other good detail I should consider. I've seen smart and fair questions raised in response to Mr. Abramson's claims: e.g., on (I believe) HTMLGiant, a woman raised the quite reasonable point that maybe it was "misleading" for Abramson to say that Iowa's program was the "best" MFA program long before any other program existed; after all, "better" and "best" imply that there's something to which the item in question can be compared. His response to her was, in my view, rude and unfair.

Besides, I'm not really writing to him anyway.

 


The assumptions you cannot make in science

Bit by bit, I want to respond to several assumptions made by Mr. Abramson. What I'll say tonight:

He describes his respondents as "well researched," yet he provides no empirical evidence whatsoever to support this claim. Also, he states in his "methodology" that his enterprise isn't "scientific" because, he argues, not all programs have responded. The problem, however, is that even if every program WERE to provide all the data he's searching for, his "rankings" (or "index," or whatever P&W wants to call it this year) would STILL be unscientific, and here's part of the reason why:

One cannot make assumptions about one's sample without supporting evidence; also, human subjects, when it comes to their OPINIONS or FEELINGS (as opposed to, say, tissue samples), are fraught with well-known interpretive difficulties that go beyond those found in the typical study in the natural or physical sciences.

Anyone who fully understands the scientific method understands at least this much: Making unsupported assumptions about your sample is NOT THE WAY SERIOUS SCIENCE IS DONE.

Anyway, more later...

 

Oh, and I'll add...

He argues that surveying MFA graduates would create a biased sample because graduates would tend to rank their own programs highly. Fair enough--at least in theory.

But his method of examining prospective applicants doesn't get rid of the problem of bias; it merely replaces one kind of bias with a set of other biases beyond funding: location, picking programs that are "easy" to get into, having a "connection" to a particular faculty member, etc., etc., etc.

Noticing such details isn't rocket science.

(Oh, by the way: I'm trying to figure out how to separate paragraphs with white space. It seemed easier last year!) 

 

I agree with Caterina

These rankings continue to be an absurd blemish on P&W's otherwise superb support for the CW community. The whole debate seems very simple to me: the information is useful, so make it available. But ranking requires criteria, and no one has yet come up with sensible and generally applicable criteria for ranking MFA programs. Seth Abramson's criteria might work for him, and that's great. But putting P&W's name on Abramson's ranking is silly (almost as silly as prorating the number of MFA programs founded in the 2010s on the basis of the number founded in "the first thirty months" of the decade).

A few final (?) thoughts

I agree with you, TimO'M.

A few additional claims made by the poll's creator that I'd wanted to respond to, including statements made by him that I think of as "Sethisms"--a term I don't mean as denigrating but use because Seth has made a number of claims I'd never heard elsewhere but that he presents as if they should be believed merely because he made them (unless he thinks they carry some other obvious force: e.g., that they're self-evident or were handed down by the MFA Goddess and transcribed by Seth):

1) That MFA programs provide a "nonprofessional, largely unmarketable degree..." The problem with this claim is that it didn't use to be the case, before the number of MFA programs mushroomed (I think there are too many MFA programs now, and I suspect that some of them were created as cash cows--little equipment required, good salaries for the faculty, and cheap labor from those students who do manage to get funding). The same phenomenon, more or less, has happened with law schools--traditionally a professional and marketable degree--even though most Harvard law grads probably do manage to find good-paying jobs in the profession: http://www.nytimes.com/2011/01/09/business/09law.html?pagewanted=all.

In fact, numerous law professors and members of the American Bar Association have questioned the ethics of this phenomenon.

2) That teaching is a relatively unimportant component in the MFA experience. While Seth is welcome to his opinion on this matter, that's all it is: his opinion. I earned my MFA from a program that, according to Seth, is associated with high post-grad employment. Why did I choose to apply there, though? A) the quality of the alumni; and B) the quality of the writers on the faculty. Most others I knew who had applied to harder-to-get-into programs considered the same two factors.

Although being a good writer doesn't guarantee that one will be a good teacher, I've had only one writing teacher who excelled at the former but not the latter. Most good writers are good readers. How can that help (enormously, I'll add) an MFA student? By being read by a nuanced reader who understands the art form--someone who isn't also in competition with you, by the way--you can learn what you're doing well and what you're not doing so well. (Many of us have witnessed or even experienced this phenomenon: one student will make, in workshop, a humane and fair-minded criticism of another student's piece, and the latter will later say, as payback, something nasty about the former's work. It's childish but also human, and it's more likely to occur among peers.)

No precise scientific measure will ever be created for MFA rankings, and I suspect that's why Abramson treats, for example, the quality of the faculty as rather trivial. How would he be able to measure the quality of the writers on the faculty? By awards won? Which awards? As imperfect as it was, I find the old U.S. News & World Report approach helpful in that a) a faculty respondent was unable to rank her own school and b) faculty, who often guest-teach at other programs, have an idea of where the better students tend to be studying. Any more "scientific" a ranking seems highly unlikely to me.

3) That it's really one's classmates--peers--who determine the quality of one's experience in an MFA program. Again, Mr. Abramson is entitled to his opinion, but that's all it is. A talented poet, and perhaps the gentlest person in my class, left after the first year (of a four-year program) for what he said would just be a "leave." He never returned. One thing he told me before he left was that he'd found no "writing community" there. Others did. But let's face it, an MFA program can include a lot of backbiting among students. (A friend of mine who attended Iowa in the '80s said that a running joke there was that the Iowa Writers' Workshop kept the student counseling services supplied with clients. Perhaps the environment there is more humane now. It's refreshing to see the current director publicly state that applicants she's strongly supported--based on their writing samples--have sometimes been, to her surprise, rejected by the votes of the rest of the faculty.)

4) This distinction between "studio" and "academic" MFA programs, terminology I hadn't encountered pre-Seth Abramson (though I'd done an enormous amount of research on programs before I applied). He's said that Iowa is one of the "least academic" programs. By what measure? That they don't give grades? I know someone who took, during his MFA program there, a seminar that included classical Greek thought and was taught by James Alan McPherson: Pulitzer winner, Guggenheim and MacArthur fellowship recipient, and graduate of Harvard Law School before he attended the IWW. (Ever read any of his essays, often known for their intellectual, as well as emotional, nuance?) Not an "academic" program? (In contrast, my more "academic" program focused on reading literature as an art form; no postmodernist/post-structuralist/cultural studies-based lit-crit was involved. Otherwise, I wouldn't have attended.)

5) That the level of the writing of MFA students at Iowa (or similar programs) is exceptional (I wish I could find the reference to that--if I do, I'll include it)--another justification for the claim that teaching isn't all that important?

While, as an undergraduate, I was taking other kinds of courses at Iowa, I used to sneak to the bin of fiction submissions for workshop (but only after the workshop had met) and steal the one or two leftovers (I wanted to write fiction but was also scared by the prospect). Some of the writing was exceptional. Some of it, though, was rough-hewn (it was a workshop, after all)--and occasionally it was relatively bad, even if the prose was pretty good. My friend once described to me Frank Conroy's response to such stories: "Beautiful prose in the service of what?" (i.e., where was the plot, the characterization, the conflict, the sensory detail...?) Yes, this is hearsay, but I've heard the same depiction from several other grads of the IWW.

Given all of these obvious questions in response to this "ranking" system, what is it that has convinced P&W to attach its name to it and give it such exposure?

I want to raise one more matter (one I consider at least as important as the above concerns I expressed), but it's getting rather late, so I'll sign off for now.

My feline pal, Caterina (one of three cats I live with), thanks you on my behalf for your indulgence--assuming you've made it this far into my comments.


While I'm thinking of it...

On the positive side: The application numbers are being called “popularity,” as they should be.

On the less positive side: It appears that Seth has still failed to distinguish “selectivity” from “acceptance rate.” As a Yale University administrator, whom I quoted last year, pointed out, the quality of the applicant pool makes a huge difference. In other words, a program that has a 25 percent acceptance rate might be more selective than some schools with, say, 10 percent acceptance rates. (And I have no bone to pick here: According to Mr. Seth’s own measures, the program I finished has a 4-5% acceptance rate.)

I’m, of course, in the above references, talking about Columbia (and some of the other NYC schools). For whatever reasons, Columbia’s MFA program has been associated with an exceptionally large number of fine writers. Tom Kealey and Seth Abramson were correct in alerting MFA applicants to the reality that funding is more available at some schools than at others, and that some of those latter schools are incredibly expensive if you don’t get funding. But it seems that Mr. Seth categorizes such schools as moral transgressions, even though some students get funding from them. (And anyway, if you’re living in NYC and you’ve got the money...)

I also wrote earlier about Seth’s distinction between “studio” and “academic” MFA programs in creative writing, a distinction that caught my attention because I’d never seen it anywhere when I applied to programs in the ‘90s—which is why I came to call such terms “Sethisms.”

Again: I have a friend who, during his MFA program at Iowa, took a seminar under James Alan McPherson--who also has a Harvard Law degree--on early classical Western thought. How is that not “academic”? And why should we conclude that artistry and intellect are mutually exclusive? Since when? The idea that they’re deeply different is a fairly recent distinction in the West.

And one more time: In my own four-year program, we didn’t study Derrida or Foucault, etc., etc... So is that “academic” or not?

Oh, and I’ll add for good measure: I think Jorie Graham is, at least in her later work, a fantastically bad poet. Iowa (IWW) is lucky to be rid of her. And if we’re talkin’ intellectual stuff: Graham’s stupidly irrelevant references to obscure Latin botanical terms and to quantum theory say one thing she seems to want others to believe about her above all else: “I’m really really really really smarter than you!!”

(And I'll later post a small bit about Columbia.)