The Top Fifty MFA Programs in the United States: A Comprehensive Guide

by
Seth Abramson
From the November/December 2009 issue of Poets & Writers Magazine

For the latest rankings of the top fifty MFA programs in creative writing, read "2011 MFA Rankings: The Top Fifty." For a ranking of low-residency programs, read "2011 MFA Rankings: The Top Ten Low-Residency Programs."

The following is an excerpt of an article that appeared in the November/December 2009 issue of Poets & Writers Magazine. The print article and its accompanying rankings include eight categories of additional data for each program, including size, duration, cost of living, teaching load, and curriculum focus.

"When U.S. News & World Report last gathered original data about graduate creative writing programs, in 1996, it did so based on two erroneous assumptions. First, it presumed that no part of the writing community was better equipped to assess the relative strengths of the country's then three-score MFA programs than the faculties of the programs themselves. In fact, there was rather more evidence to suggest that no part of the community was less suited to opine on this topic than the one selected. MFA faculties are by definition composed of working writers for whom teaching is an important but often secondary pursuit; likewise, faculty members, because they are primarily focused on writing and teaching within their own programs, have no particular impetus to understand the broader landscape of graduate creative writing programs.

A second major flaw—among many smaller ones—in the USNWR approach was the premise that, unlike every other field of graduate education, graduate study in creative writing was singularly resistant to quantitative analysis, and that therefore the only category of assessment worthy of exploration was faculty opinion on individual programs' "reputations." In fact, every graduate creative writing program has (somewhere) a documented acceptance rate, an annual if changeable funding scheme, and a whole host of less weighty but equally quantifiable data points: student-to-faculty ratio, matriculating-class size, credit-distribution prerequisites, local cost of living, and so on. USNWR ignored all of these.

Irrespective of the approach taken by USNWR, the evils of educational rankings are indeed legion and do urge caution on the part of any prospective analyst of MFA programs. At base it is impossible to quantify or predict the experience any one MFA candidate will have at any one program. By and large, students find that their experiences are circumscribed by entirely unforeseeable circumstances: They befriend a fellow writer; they unexpectedly discover a mentor; they come to live in a town or city that, previously foreign, becomes as dear to them as home. No ranking ought to pretend to establish the absolute truth about program quality, and in keeping with that maxim the rankings that follow have no such pretensions. When I first began compiling data for comprehensive MFA rankings, nearly three years ago, I regularly told the many MFA applicants I corresponded with that educational rankings should only constitute a minor part of their application and matriculation decisions; that's a piece of advice I still routinely give, even as the creative writing MFA rankings I helped promulgate have become the most viewed and most utilized rankings in the field—read online by thousands of prospective MFA applicants every month.

None of the data used for the rankings that follow was subjective, nor were any of the specific categories devised and employed for the rankings based on factors particular to any individual applicant. Location, for instance, cannot be quantified—some applicants prefer warm climates, some cold; some prefer cities, some college towns; and so on—and so it forms no part of the assessment. Other factors traditionally viewed as vital to assessing MFA programs have likewise been excluded. For instance, conventional wisdom has been for many years that a program may be best assessed on the basis of its faculty. The new wisdom holds that applicants are well advised to seek out current and former students of a program to get as much anecdotal information about its faculty as possible, but, in the absence of such information, one must be careful not to confuse a writer's artistic merit with merit as a professor. In the past, too many applicants have staked years of their lives on the fact that the work of this writer or that one appealed to them more than others, only to find that the great writers are not always the great teachers, and vice versa. Likewise, mentoring relationships are difficult to form under even the best of circumstances, particularly because neither faculty member nor incoming student knows the other's personality and temperament in advance. In short, determining whose poetry and fiction and memoir publications you most enjoy yields little information about whose workshops and one-on-one meetings you will find most instructive and inspirational.

Comments

Low-residency programs

Seth Abramson wrote: "Generally speaking, low-residency programs do not offer much if any financial aid, cannot offer teaching opportunities to students,...are less likely to be gauged on the basis of their locales (as applicants only spend the briefest of periods on campus), and, because their faculties are part-time, are more likely to feature star-studded faculty rosters."

Given that hundreds, surely thousands, of people DO apply to low-residency programs each year, doesn't that suggest that many of the qualities measured in these rankings are unimportant to a significant number of students? And what is the basis for asserting that low-residency faculties are more star-studded than others? Even if it were true, how would it matter?

Finally, don't rankings merely offer a lazy shortcut to school selection, perpetuating the myth that some programs are inherently better than others, when prospective students would benefit most by finding the program best suited to their individual aims and needs? You may not intentionally provide these rankings as a template for school selection, but you can bet that many people will foolishly use them that way, just as people use the US News & World Report rankings.

Re:

Hi Jelhai,

You're absolutely right that the hundreds (not thousands; the national total is under 2,000) of aspiring poets and fiction writers who apply to low-residency programs annually are, generally speaking, a very different demographic than those who apply to full-residency programs: they tend to be older, they are more likely to be married and/or have children, they are more likely to be professionals (i.e. have a career rather than a job), they are more likely to be (only relatively speaking) financially stable, and they are more likely to have strong personal, financial, or logistical ties to their current location (hence the decision to apply to low-res programs, which require minimal travel and no moving). That's the reason this article did not contemplate low-res programs, in addition to the reasons already stated in the article. So when the article makes claims about MFA applicants, yes, it is referring to full-residency MFA applicants. Assessing low-residency programs and their applicants would be an entirely different project, requiring a different assessment rubric as well as--as the article implicitly acknowledges--a different series of first principles about applicant values.

As to the rankings that are here, keep in mind that what you're seeing is an abbreviated version. The full version, available either in the upcoming print edition or as an e-book (available for purchase on this site), includes data categories for each school: duration, size, funding scheme, cost of living, teaching load, curriculum focus (studio or academic). These are some of the most important "individual aims and needs" the hundreds and hundreds of MFA applicants I've spoken with over the past three years have referenced. Indeed, I've even done polling (the first-ever polling of its kind) to ask applicants what they value most in making their matriculation decision: in a recent poll of 325 MFA applicants (where applicants could list more than one top choice), 59% said funding was most important, 44% said reputation (e.g. ranking) was most important, 34% said location, 19% said faculty, and much smaller percentages said "curriculum" and "selectivity."

These rankings (and the article above) specifically urge applicants to make their own decisions about location, but provide ample information about funding, reputation, curriculum, and selectivity--four of applicants' top six matriculation considerations. Needless to say, many applicants will have "individual aims and needs" that they need to consider in making their matriculation decision, and I always urge them to look to those needs with the same fervor with which they consider funding, reputation, location, and so on. But to imply these rankings haven't done the necessary footwork to ask applicants what their primary aims and needs are is simply incorrect. In fact, in the poll referenced above applicants were given the opportunity to vote for "none of the above"--meaning, they were invited to say that their top consideration in choosing a school was something other than the six categories referenced above. Only 1% of poll respondents chose this option. So when we speak casually of "individual aims and needs," I think we need to remember that these aims and needs are no longer as unknowable as they once were--largely due to efforts like the one that produced these rankings. And again, for those who don't see their own aims and needs reflected in the data chart that accompanies this ranking (and which you haven't seen yet), I say--as I always say--that these rankings and this data should be used only as a starting point for making an intensely personal and particularized decision.

Take care,
Seth

Re:

P.S. I should say, too, that the poll I mentioned above is just one of many. Another poll (of 371 applicants, in which applicants could pick more than one first choice) showed that 57% of applicants have as their top "aim" getting funded "time to write," 42% say employability (i.e. the degree itself), 36% say mentoring (which causes them to primarily consider program size, as program size helps determine student-to-faculty ratio), 34% say "community" (which again causes applicants to consider program size, though it pushes many of these applicants toward larger programs, i.e. larger communities), 19% say "the credential" (again, as represented by the degree itself, though this also pushes such applicants to favor shorter programs, with a lower time-to-degree), and much smaller percentages said that they wanted an MFA to validate themselves as writers or to avoid full-time employment (very similar to wanting "time to write," per the above, just as "validation" is intimately related to "mentoring" and "the credential"). Again, these polls were not intended to be exhaustive, though it's noteworthy that 0% of poll respondents chose "none of the above."

Suspicious

A graduate of Harvard Law School and the Iowa Writers' Workshop
I'm not accusing anyone of anything, but you have to realize how suspicious this looks.

Re:

Hi Clairels,

I'd respond to your comment, but honestly I have absolutely no idea what you mean to imply or what your concern is. I attended both those programs (J.D., 2001; M.F.A., 2009), and certainly don't regret either experience.

Take care,
S.

P.S. I think it was the reference to HLS that threw me. If you're talking about my IWW affiliation (as I now see you might be), I don't know what to tell you except to say that you won't find a single person who's well-versed in the field of creative writing who's surprised by Iowa's placement in the poll--a poll that was taken publicly and with full transparency, and whose results are echoed by the 2007 poll, the 2008 poll, the (ongoing) 2011 poll, USNWR's 1996 poll, and the 2007 MFA research conducted by The Atlantic. Iowa has been regarded as the top MFA program in the United States since the Roosevelt Administration (1936). In three years of running MFA polls, I think you're the first person to suggest to me (even indirectly) that Iowa might have finished first in the poll for any reason other than that it finished first in the poll (to no one's surprise). So no, I can't say that I see my affiliation with the IWW--an affiliation I share with thousands of poets (Iowa graduates 250 poets every decade)--as "suspicious." --S.

To be fair, Seth, I think it's important to point out that Iowa was the _only_ MFA program in 1936. So of course it was regarded as the top program then--among one. (Not arguing about anything else, promise; the way that fact was dropped just seemed potentially misleading.)

Hi SJ,

Sorry for any confusion--my point was not that Iowa is considered the top program because it was founded in 1936, but rather that it has been considered the top program since 1936. It has held that position (across years of various polls, rankings, and articles) for many reasons, and certainly longevity has played a part--more alumni mean more alumni "successes," for instance, particularly during those decades in which there was less competition with other MFA programs--but it's obviously much more complicated than a simple matter of temporality or history. It's worth noting that these rankings are among the first (if not the first) not to indicate, implicitly or explicitly, that the IWW is the top program in all respects. That is, the IWW is not ranked even in the top 20 here in annual funding, total funding, or selectivity. To the extent these rankings were directly inspired by research done by Tom Kealey in 2005 (and I think Kealey's implied rankings were indeed the first to point out Iowa's varying strengths in different areas), I do consider this ranking to have treated the IWW in a considerably less sacrosanct manner than previous rankings.

Best,

Seth

Acceptance Rates

How exactly were the "selective rates" for each school determined on this list?

Hi JTL,

Per my contract with P&W (as I'm sure you can understand), most of the MFA data has been temporarily removed from my website under a non-compete clause (which I find wholly reasonable). It'll go back up on the site at the end of February 2010. In the meantime, if you have a specific question about data not available from P&W, e-mail me and I'll do my best to answer your question.

Be well,

Seth

"Sorry for any confusion--my

"Sorry for any confusion--my point was not that Iowa is considered the top program because it was founded in 1936, but rather that it has been considered the top program since 1936. " What? This was exactly my point. It's essentially meaningless to claim that IWW was the "top" program since 1936 during the decades when it was the only program.

Hi Phoebe,

I've addressed our miscommunication elsewhere (i.e. we're dialoguing elsewhere), so I won't get into much more detail here. To this point (and accepting that one day this may change), Iowa has shown that it is popularly considered the top program no matter how many competitors it has: 141 full-res competitors, as now, or only a handful, as it had in the 1950s. As to the period between 1936 and (say) 1950, a) Iowa obviously must get credit for originating the idea of the graduate creative writing program (as this both indicates it would have been regarded as the top program even if it had had competition and indicates that, even in the absence of competition, Iowa gets credit for being the "top" program both because it empirically was and because it deserved to be for having thought of an idea others did not), and b) the more than a half-century since 1950 shows that Iowa's reputation is unaffected by how much or how little competition it has. So to the extent you're suggesting Iowa was only the top program from (say) 1936 to 1950 because it was the only program--which bare fact itself is true--I think that's actually more potentially misleading than anything I wrote, as we have ample evidence to suggest something inherent in the Iowa model (and the mere fact that the originator of an idea has a special cachet as to that idea for many years afterward) would have made Iowa #1 from 1936 to 1950 no matter what. So I don't agree with you that Iowa's reputation from 1936 to 1950 is "meaningless."

Be well,

Seth

question about collection of information

Just wondering how you collected information about funding, acceptance rates, and the success of graduates, which is not always available in promotional materials. I would have gladly shared this information from the program that I direct, but I was not consulted.

Hi SSTG,

This was one of several philosophical dilemmas the rankings were forced to confront over the past three years (others of which I will be writing on in more detail in the near future, in response to several queries about the rankings' methodology). Specifically, the question of how to collect data presented, among many others, several issues relevant to your comment/query: 1) Even when the AWP, the organization that officially represents MFA programs nationally, has asked programs for data (e.g. in 2007), only around 50% have decided to reply (and for some crucial, highly-specific queries, only 35% have agreed to provide responses); 2) Consequently, any direct request made to the programs by a freelance journalist would, we can safely assume, a) be met with silence at a rate much greater than 50% (as P&W does not have as much of a direct claim on MFA programs' administrative compliance, understandably, as AWP does), and b) especially privilege those programs that responded to said journalist's request, when in fact the programs' private "responses" would do nothing to make this information directly and transparently available to applicants on a daily basis--the purpose of all educational rankings being, ultimately, to achieve just this sort of transparency of vital data for those most in need of it. In other words, to any program (and this is by no means directed particularly at you, SSTG) that says it would volunteer, or would have volunteered, such information if asked, I respond: 1) Did you provide this information to (as it were) your "parent" organization when it was requested of you in 2007? 2) Do you regularly provide this information to applicants when they request it by phone, mail, e-mail, or in person? 3) If you have this information and are willing to part with it, why does it not appear anywhere on your program website, whose content you control absolutely and in real time, and whose fundamental premise is to answer those questions most central to your applicant constituency (among whom, you surely know, funding and selectivity perhaps rank one and two as items of interest)?

As indicated in the methodology article, the rankings were intended to privilege transparency--thus, program data that had been made transparently available was privileged both for ranking purposes and for long-term data-collection purposes. Consequently, funding and admissions data was compiled via a number of sources, including (variously) direct interviews with current applicants who had been successful in gathering this data upon request, data provided in widely-disseminated letters of acceptance or rejection provided by programs to applicants, individual program websites, (in some instances) data provided directly by the programs to researchers (if that program represented that it was in the process of adding such information to its website and subsequently did so), public statements of MFA faculty members or administrators on high-traffic MFA-applicant blogs/communities or major media outlets, and so on. In other words, programs that made an effort to disseminate the information publicly and widely were acknowledged as having done so via the many methods available for such transmission -- with program websites being the most common of these -- instead of an individual researcher making a private phone call and getting told intimate program data on a case-by-case basis only.

Whereas many instances of research and ranking are philosophy-neutral, this ranking was--transparently, and this has been the case since January of 2007 (when the research effort started)--philosophy-inflected. It might appeal to my own ego to know that programs are personally willing to part with information if I (and only I) ask for it, but that does no good whatsoever to the thousands of applicants who are not finding this data on program websites when they should, or (just as commonly, I'm afraid) are being refused this data when they request it or (in some rare instances) lied to about what the data is and/or its availability. On some level I wonder why a ranking should (or how it ethically can) acknowledge any data a program didn't see fit to offer, with equal alacrity, publicly and to its most important consumers--who are paying application fees for the privilege of being consumers--with the sort of transparency and broad dissemination that suggests the program is indeed willing to be accountable for that data.

I strongly believe MFA directors are near-universally well-intended; that said, when applicants have been crying for funding and admissions data, in the thousands, for more than a decade now in massive online communities MFA programs are aware of (and in many cases monitor), the claim that applicants had only to ask for this obviously-relevant data to then immediately receive it somehow rings false. While the consequence of using a philosophically-inflected research method is that the resultant data is (as the article, rankings, and widely-written-on research environment freely concedes) a) unscientific, and b) probative rather than conclusive, neither a scientific nor conclusive study of a target population can be conducted when--even under the most ideal conditions (cf. AWP's survey in 2007)--around half of the target population withholds the very data required by such a proposed methodology. To be candid, then, the question you've asked me is far better addressed to your fellow MFA directors, as only one of two possibilities can be readily envisioned: 1) your fellow directors did not want their efforts assessed (ever) and/or the resultant findings published (ever), and thus withheld the data (for years or decades) that would have made such an assessment possible, or 2) your fellow directors heartily (as you've indicated is the case for you yourself) welcomed such an analysis, and simply inexplicably failed to press the button on their PC keyboards which would have uploaded this absolutely foundational program data to their program's long-standing public website. In the article, then, when I refer to a "virtuous circle," this is what I'm referring to--the manner in which a ranking system can make programs more accountable to their prospective students, more transparent to the public, and (felicitously) convince programs to finally take measures to which, speaking in retrospect, they now claim to have been totally and without reservation amenable.

This is a brief answer to a question I hope to address in greater detail in the future, so I hope you won't mistake my candor here for sarcasm--to the extent I seem somewhat bewildered by any program's presently declared willingness to provide information never before provided (but within the program's power to have immediately and publicly provided 24/7/365 for years), that bewilderment is not a put-on, or intended as smarmy and performative, but sincere and abiding.

Best wishes,

Seth

nothing is absolutely objective

Although I respect the fact that Mr. Abramson wants to find a more objective means of ranking MFA programs, it's inaccurate and unprofessional to assert that "none of the data used for the rankings that follow was subjective." The way in which a person presents data always slants the interpretation of it. Sociolinguists have been studying this for years. So Mr. Abramson's claim that these rankings are based on objective data misleads the reader into believing this was a research study. It's not. As a PhD researcher, I have 20 years of experience collecting, analyzing, and interpreting data, and I've authored numerous published papers. I didn't read anything that resembled a traditional "methods" section, which is tediously detailed, concrete, and explicit. Without such a description of how the data is gathered and tabulated, it's impossible to evaluate it. I understand that Mr. Abramson does not have a vested interest in the outcome of the rankings, and he's assuming that the respondents don't either. Even if we are comfortable with the second assumption, we are left wondering how the respondents were chosen, how many of those contacted then chose to participate, how representative these respondents are of the larger group, and whether any statistics were run. Were there statistically significant differences between programs? Even statistics can be subjectively employed and reported, but they are more objective than asking a few people what they think and then presenting those results as definitive. It is absolutely possible that these rankings represent some core of truth, but it's also possible that they don't. Without an explicit description of the methodology, it's impossible for readers to evaluate this. Unfortunately, in the meantime, the assertion that these rankings have been objectively determined will mislead impressionable young writers interested in pursuing an MFA. It will potentially influence their application decisions and possibly damage the reputation of programs NOT in the top 50.

Hi ewjunc,

The article's reference to subjectivity was specifically targeted toward the history of rankings in this field, as well as the history of this particular research. The element of the data chart composed of applicant polling is unscientific, has always been presented as such, and has never in three years claimed (online or in print) to be a scientific research study. Such a study would be impossible for two reasons (among others): 1) the programs are by and large withholding the data that would be necessary for a scientific research study; 2) scientific polling could likewise only be done with the assistance of the programs, which have thus far refused (at a rate greater than 50%, per AWP's 2007 survey) to respond to data queries from even their own professional organization.

So when I spoke of "subjectivity" in the article, it was a reference to two things: 1) the most recent attempt at a ranking of programs, done in 2007 by Atlantic journo Ed Delaney (which Delaney himself said was not a "ranking," but which was undoubtedly "subjective" inasmuch as Delaney admitted his own opinions were given weight in calculating results); 2) the fact that many have referred to the TSE research, in the past, as "my" rankings, i.e. as a reflection of my own subjective opinion of which programs are best. They are not any such thing. The funding, selectivity, and placement rankings are based on hard data only. The polling is based singularly on the opinions of others, and is not inflected by my own opinions.

You are certainly correct to say that every ranking system ever devised is--in one sense at least--subjective, because someone, somewhere, had to create its methodology. I rather think that would make the word "subjective" lose all meaning; indeed, if all rankings are by definition subjective, there's no possibility anyone would confuse anything I wrote for suggesting otherwise, as "objective" rankings (definitionally, per you) are impossible. I'm confident most readers took me to mean that these rankings do not reflect the subjective opinion of their author. But to say that these rankings "present [their] results as definitive," when the article accompanying the rankings explicitly disavows any such totalizing quality, is, I think, unfair. Likewise, to speak of statistical research of the doctoral sort is to make two mistakes: 1) it ignores the fact that creating a ranking methodology is a much more complex question than merely one of statistics (purpose, audience, impact, and many other pragmatic considerations come into play); 2) it presumes that we are in some sort of laboratory environment, and not one in which the test subjects have essentially, at a greater than 60% clip, fled the building and taken all their data with them. To repeat: A scientific study of MFA programs will be impossible until some organization or entity obligates these programs to compile and release such data publicly. Until that time comes, any research on MFA programs will be unscientific--though, as you concede is possible with these rankings (and as I believe with certainty is the case), they are probative.

As to damaging the reputations of programs not in the top 50, I can only tell you that what I'm hearing from both applicants and the programs themselves is quite different from what you're describing: inasmuch as these rankings are the first-ever comprehensive listing of domestic, full-residency MFA programs (don't forget that you can find programs #53 through #142 online, many of which have achieved a top 50 ranking in individual categories, and are noted for this), applicants are telling me that they're discovering new programs they never would have known about otherwise, and countless smaller programs are reporting, since the time the TSE research was initiated in 2007, significant increases in the size of their applicant pools.

Be well,

Seth

Hi Seth, Good job. Have a question for you. Are these rankings endorsed by Tom Kealey? It seems that you guys have been buddy-buddy in recent years, working together for the 2nd edition of the MFA Handbook, but if I remember correctly, Tom argued in the first edition that Iowa is not even in the top ten of his list. Has his view changed since then? Are these rankings going to be included in the new edition of his MFA handbook? Just curious! Kevin

Hi there Kevin, thanks so much for your comment. I think you'd have to ask Tom directly to get his thoughts on this; I know he endorses MFA research generally, and in the past he's endorsed (indeed many times) my research in particular, but as to these specific rankings I've not asked his opinion so I couldn't tell you (though to the extent the methodology used here was the same as that which was used for the rankings Tom published [and I compiled] in the 2008 edition of the Handbook, I think many will, understandably, presume some form of tacit or explicit endorsement on Tom's part).

With respect to Iowa's (approximately) #11 placement in Tom's rankings of five years ago, I believe (though again you'd have to check in with him on this) that a primary reason for that ranking was Tom's impression, at the time, that Iowa had a tiered funding system. In fact it does not, and in roughly 2005 it had just switched from a tiered to a fully-funded system--but at that point, so close to the switch-over, Tom had been given bad information by someone who was only familiar with Iowa's old funding system under Frank Conroy (IWW Director, 1987-2005). Under Sam Chang all Iowa students are fully funded, and I'm fairly well convinced (knowing Tom) that had he had this information 4-5 years ago he would have put Iowa in the top ten nationally, though where in that top ten they would have gone (in his estimation) is--again--a question for Tom to answer, I think. Keep in mind that in the Second Edition of the Handbook (2008), Iowa is actually ranked #1 overall, top 25 in funding, and Top Tier (i.e. top 12) in the national comprehensive tiered rankings--this last of which was the only ranking that included my own subjective opinions, but which I know Tom had indicated to me seemed very much in line with his own thinking.

As to whether the rankings will appear in the next edition of the Handbook, I haven't spoken to Tom about this, though certainly if I do end up involved with that project, as I was in the last one, I'll have some input into that decision. Of course, if and when 2011 MFA rankings are released (and that's still something I haven't decided), a lot of things will need to be updated, including the Handbook. Again, thanks for the note! Be well, --S.

Re: UNH Core Faculty--include Mekeel McBride, please!

Most people who have studied poetry at UNH have studied with poet & professor Mekeel McBride--often more than once (or twice or three times or...). Not only is she core, she is remarkable. Please correct this omission from the University of New Hampshire/MFA/Core Faculty listing.

Other factors: health insurance

Since Seth Abramson is considering cost of living and funding, I think he should consider another, really huge factor: Does the school offer health insurance? There are some very highly ranked CUNY programs. Yes, CUNY is cheap, but there is no health insurance. If you really want to commit to a writing program, you don't really have time for a full-time job with health benefits. Health insurance was a big factor in my selection, and I'm sure it is for many others as well.