In a comment left on my last post (which is not quite the hot topic it was, but is still simmering), Deb said,
I wonder whether the "research" web sites might reconsider their dichotomous listings of publishers. I.e., a house is either listed as "not recommended" or no warning of any kind is listed. Example: "a publisher".
Perhaps a rating system would be more apropos. Pubs who have had legitimate, verifiable complaints against them in a certain narrow range, such as breach of contract, nonpayment of royalties, failure to distribute, etc., might receive a "D" grade, whereas a publisher without complaints would merit an "A".
A system such as this would certainly be more work. However, it would warn authors off from the well-intentioned non-scam pubs who haven't performed as they've promised.
Scams, of course, would merit a big fat razzberry "F."
Although Writer Beware doesn't list or recommend publishers, the issue of a rating system is an interesting one that has come up before, so I thought I'd address it in a blog post.
I have a number of problems with the idea of grading or rating publishers (or agents), and I think it would be difficult, if not impossible, to come up with a reasonably objective rating system that would be helpful to writers--and wouldn't become an insupportable headache for the rater.
- Documentation is straightforward, but complaints can be hard to assess. One might possibly be able to come up with an objective system of grading based on documented problems--nonpayment, poor contract terms, breach of contract. But coming up with a system to grade authors' complaints would be a lot harder.
"My publisher owed me X and didn't pay" is reasonably straightforward, but "My publisher sent me a nasty email when I asked a question on an email loop" is more subjective, even if many authors say the same thing. As we've seen again and again, nastiness and harassment are all too frequently the last refuge of a failing micropress publisher. But what if the publisher is nasty in private, but otherwise does a good job of getting its books out? I have some examples of this in my files. Would the publisher get a "D" for author relations, a "B-plus" for publishing, or some complicated grade in between? (My head is already starting to hurt.)
There's also the question of context. Two serious complaints may indicate a problem publisher--or they may be a fluke. Certainly there are publishers in my files that I wouldn't hesitate to slap an "F" rating on--but there are many more about which I have enough complaints and/or documentation to suggest that caution may be in order, but not enough information to feel confident about giving the publisher a rating or a grade.
Another issue--how do you rate author unhappiness, which may be a sign of real problems with the publisher, but also may reflect unrealistic expectations on the part of the author? Does the mere fact that complaints exist dictate a lower rating, or are there complaints that one can safely ignore? For instance, I regularly get emails from writers who are indignant that AuthorHouse or a similar self-publishing service did nothing to market their book. That's not a problem with the self-pub service; it's a problem with the author's expectations. I've also heard from authors who are angry with otherwise problem-free micropresses for similar reasons. You all know my personal opinion of most micropresses--but is this really complaint-worthy? If you choose to publish with a micropress, you have to accept that it isn't going to do what a commercial publisher will. Shouldn't the author have done enough research at the outset to know what he or she was getting into?
In applying a rating system (if you did it right), you'd have to compare and contrast and weigh all these factors. Not only would this be difficult and time-consuming (and subjective--see below), it might not be especially helpful for writers, unless you explained the factors that went into each grade. Again, that's time-consuming. I'm not going to whine about watchdogs being volunteers who do the work in their spare time--but I just can't see this as the best use of limited volunteer hours.
- Complaint collection is serendipitous. The watchdog groups have to depend on authors who are having problems to come to us. We can't be sure they will, even where there's a really bad situation. (For instance, yet another micropress is currently in the process of imploding, but not one of its authors has contacted Writer Beware. My knowledge of the problems is second-hand, from blogs and message boards.)
So no complaints about a publisher might mean the publisher is great--or it might just mean we haven't heard anything bad. A mere absence of complaints, therefore, doesn't mean the publisher deserves a good grade.
Nor would a rating system eliminate the problem of publishers without notations. Even if we rate the publishers we do have information and documentation about, there will always be a large number of publishers about which we have no information or documentation at all, and which thus can't be graded.
- One size does not fit all. Different publishers have different specialties and focuses. They also have different cultures and different expectations of their authors. An "A" publisher for one author will not be an "A" publisher for another, even if both authors write in the same genre.
I'm also concerned that writers, who are always eager for a shortcut, might use ratings as an excuse not to do proper research. (I know, I know. Many are going to do that anyway. But why encourage it?)
- No matter how objective in their intent, rating systems are created and applied by human beings, and are thus, in the end, subjective. Nonpayment of royalties or contract breaches, when documented, are obviously problems deserving of a poor grade. But if I created the system and did the rating, I might give a publisher a "D" because it had no distribution beyond the Internet and its owner had no previous publishing experience--even if there were no author complaints and the publisher had a decent contract. Someone else doing the rating, however, might feel that inexperience and POD distribution only pushed the publisher down to a "B," especially if the publisher demonstrated good intent and was trying hard. I disagree--but hey, that's my bias. I know a lot of people feel differently.
Remember the currently imploding micropress that I mentioned above? (Don't worry, I'll blog about it soon.) I'd have given it an "F" from Day One, due to a combination of factors: the owner's total lack of any relevant professional background, a seriously nonstandard contract, no distribution, horrid amateurish book covers, and various other evidence of nonprofessionalism. Nonetheless, other observers were willing to give this publisher a chance, based on its expressed willingness to learn and change, and its reported intent to develop distribution, make its books returnable, etc. These observers might have given the publisher a "C" or even a "B."
Could the publisher have wised up, made changes, and succeeded? Sure, in which case my "F" would have been mistaken. I've definitely been wrong about these things before. But in this case, I wasn't--which means that someone else's "C" would have been less than helpful to writers. (Which, of course, raises the issue of competing rating systems. I don't even want to think about how confusing that might be.)
Bottom line: a rating system is only as reliable as the biases of the rater.
- A rating system would be in constant dispute. Look at the shitstorm that has been stirred up by my previous post about something indisputably factual: Light Sword Publishing's recent loss of an author lawsuit. Imagine the shitstorm that would result from publisher ratings. Raters would be bombarded not just by publishers that didn't like their "Fs," but by publishers wanting to argue that their "Cs" really ought to be "B-pluses." I can imagine a situation in which a rater had to spend as much time defending his or her ratings as collecting information or disseminating warnings. Again, not a good use of volunteer time.
All other issues aside, a rating or grading system would just be too much of a headache to sustain.
For all these reasons, I think it's more helpful for watchdog groups and research sites simply to collect and disseminate information, without attempting to rate it. Writers can then factor the information into their research, as part of the process of making up their own minds.