Maximization of Internet Citations Methodology
What makes a computer journal important? To answer this question, we suggest an approach rooted in forensic expert witness testimony and computer litigation support -- one that uses the relative number of search engine citations as the evaluation criterion.
This standpoint may be relevant for authors deciding which publisher they should submit their articles to first, especially if they are seeking tenure at an academic institution. It may be especially relevant for authors who are also expert witnesses or are interested in becoming expert witnesses. Likewise, it may be relevant for lawyers, judges and mediators who have to select expert witnesses and are not familiar with the domain of expertise. How can outsiders quickly, effectively and objectively evaluate the quality of the authors, articles and publishers of the expert's journals? Such an evaluation helps establish the expert's legal credibility.
This study uses Internet search engines and the relative number of citations as a measure of quality, applying it to Journal X and Journal Y. It concludes that Journal X is the more important journal under this approach, as the comparison would typically be carried out in a court of law. The study rejects the null hypothesis that there is no significant difference in the number of search engine citations among different journals, holding constant the authors, the number of authors, and the year of publication.
Additional details, programs and illustrations have been omitted to save space and will be provided by the authors upon written request.
Evaluating the quality of a journal publisher is often a controversial issue. This is especially the case as it pertains to the promotion and tenure decisions of faculty members. For that reason, many academic articles have been written about this subject. Yet these articles may not address some issues that arise when their methods are applied outside the academic community. One such example is the perspective of forensic expert witness testimony. In that environment, the traditional academic approach may not be the best or only approach. It may be too theoretical or too esoteric. Academic studies may also take too long to conduct, leading to out-of-date results. What can be done to supplement the traditional academic methods of comparing the quality of different journal publishers, authors and articles, especially for lay people who do not know the domain, cannot invest much time in studying it, and need objective, immediate and up-to-date answers? Our response is to use Internet search engines to compare articles.
Such approaches may supplement the traditional methods published by academic researchers. Users of such methods could be the courts, lawyers, jurors, mediators and editors who must evaluate journal publishers without much background in the expertise of journalism or similar fields. Most professional editors are experts in their own domains, such as business, engineering or medicine, with very little background in journalism.
Researchers Rank & Rate Journal Publishers for Academic Promotions
The page at http://www.isworld.org/csaunders/rankings.htm reports that "The purpose of this page is to provide information about evaluations concerning the quality of MIS, POM and management journals. These evaluations may prove helpful to you in preparing promotion and/or tenure packets. In addition to the standard question of how many, there, hopefully is some concern at your university about how good your publications are. Since professionals in disciplines other than yours evaluate tenure and promotion, the rankings of journals in your discipline often serve as surrogate measures of quality. If you have published in top-ranked journals, be sure to highlight the ranking of those journals in your promotion and tenure packets."
Researchers Rank and Rate Journal Y Better Than Journal X
Are researcher rankings and ratings always applicable to non-researcher audiences? Our answer is no. Our reason is that one method of evaluation does not always apply everywhere. Alternatively, one size does not fit all; different strokes for different folks. How is a forensic expert witness testimony and computer litigation support approach different? It does not deal with researchers; it deals with juries, judges, lawyers, and experts.
Different Evaluation Methods for Different Applications
Rushinek, S. and Rushinek, A. developed "An Application Model for Computer Assisted Design and Manufacturing Using a Product Evaluation and Selection System" for Computers & Industrial Engineering (1987). Rushinek, A. and Rushinek, S. use the same word "evaluation," yet "Control Through Standard Costs and Variance Analysis for Performance Evaluation," in Economic Planning (1990), takes a different method and approach because of differences in the application environment. The same phenomenon becomes apparent in another Rushinek, A. and Rushinek, S. evaluation approach, "Accounting Software Evaluation: Hardware, Audit Trails, Backup, Error Recovery and Security," published in the Managerial Auditing Journal (1995). Again, we see a completely different approach to the process of evaluation, due to different circumstances.
Search Engines and the World Wide Web: Comparing Journal Publisher Quality of Opposing Expert Witnesses
Similarly, the way to evaluate journal publishers for a forensic expert witness testimony and computer litigation support approach should be based on a simple test that any juror, lawyer or judge could perform within five seconds at any place, at any time, with the use of a browser and the World Wide Web. The present study takes this approach. In a typical computer-related litigation between a computer vendor and a computer user, the user, usually the plaintiff, claims to be unhappy with the defendant's (the vendor's) service or product. The plaintiff usually demands a refund plus damages, and the defendant refuses to satisfy the user's demands. The court (lawyers and judges) invites experts on both sides to testify on behalf of the plaintiff and the defendant. Eventually the opposing experts will disagree, and it becomes a matter of the experts' credibility. So the question is: which expert is more credible?
One of the ways that experts can establish their credibility is by their publication record. At this point, it becomes a contest of which expert's articles are better, and which method will be most effective at convincing the court? Would it be the consensus of researchers, or is there a simpler and better way to impress the court? Our answer is yes: there is definitely a better way to impress the court than confusing it with theoretical academic arguments. There is a reason for the saying "this is just an academic exercise," implying that it is not a practical one. For the court, we definitely want a practical approach. One such approach to evaluating an article is to use a common search engine and compare the number of citations on the WWW. By simply searching the authors' names and the titles of the opposing experts' articles, the court can quickly and objectively evaluate the most recent relative quality of the journal articles and publishers of the opposing experts.
A Typical Case: Plaintiff Wants His Money Back and Defendant Refuses To Pay Back
In a typical case the plaintiff has agreed to pay $500,000 for a package of hardware and software, with an advance of $250,000 during the delivery and testing phase and the remainder due three months later. When the package was delivered, the plaintiff had 30 days to test it and accept or reject it, with a full refund upon rejection. On the 30th day the plaintiff rejected the package because querying a database over the WWW, using a browser as a client, increased the response time from 5 seconds on a client-server network to 25 minutes on the WWW. The defendant initially claimed that this was a minor problem that could be fixed within a few minutes. The minutes turned to hours, hours to days, days to weeks, and weeks to months. Four months later, the plaintiff wants the $250,000 advance back. In contrast, the defendant refuses to pay it back, claiming the 30 days have expired, and demands the remaining $250,000 due for the package.
The plaintiff's expert witnesses use the ACM-published article Rushinek & Rushinek, "What Makes Users Happy" (1986), to support their case. They claim that response time is crucial and that the plaintiff demanded the money back by the 30th day. The defendant's expert witness team uses a similar article from Journal Y, claiming that Journal Y is more credible since it has a higher ranking among academic researchers. The plaintiff counters that the higher academic ranking of Journal Y may be relevant to academic tenure decisions, but the search engine citation test is much more relevant to a court case.
The purpose of this study is to show that the search engine citation test is a very useful supplement to the traditional academic ranking of journals. In some cases, such as this one, it may even be superior for litigation support.
WWW Search Engine Takes Less Than 30 Seconds To Score Authors, Articles and Journal Publishers
Appendix A shows the results of searching http://www.google.com for: Rushinek, A. and Rushinek, S. "What Makes Users Happy." The search took approximately 0.23 seconds to complete and returned 45 citations, the first 10 of which are shown in Appendix A. The court can easily carry out this method of evaluation during a trial. All parties can understand it and agree that it is an objective method: the higher the frequency of citations, the better the quality of the article and its related publisher. Using this method, we took a sample of articles that we published during 1986 and searched for them in Google. We also added a pair of other authors' works, since those authors have also published in Journal X and Journal Y. This removed the confounding variable of the relatively low ranking of some of the articles by academic MIS researchers.
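The mechanics of the test can be sketched in a few lines of code: build a query from the authors' names and the exact article title, then read the total hit count off the results page. This is a minimal illustrative sketch; the results-page header string below is a hypothetical example modeled on the format Google used at the time, not text captured from a live search.

```python
import re
from urllib.parse import quote_plus

def build_query_url(author, title):
    """Build a Google query URL for an author name plus an exact-phrase article title."""
    return "http://www.google.com/search?q=" + quote_plus('%s "%s"' % (author, title))

def extract_result_count(results_header):
    """Parse the total hit count from a results-page header line such as
    'Results 1 - 10 of about 45.'"""
    match = re.search(r"of about ([\d,]+)", results_header)
    if match is None:
        raise ValueError("no result count found in header")
    return int(match.group(1).replace(",", ""))

# Hypothetical header text, matching the 45 citations reported in Appendix A.
header = "Results 1 - 10 of about 45. Search took 0.23 seconds."
print(build_query_url("Rushinek", "What Makes Users Happy"))
print(extract_result_count(header))  # 45
```

A juror would of course perform the same steps manually in a browser; the sketch only makes explicit that the procedure is mechanical and reproducible.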
Journal X Publisher Scores At Least 10 Times More Citations Than the Next Best Journal Publisher
Our results show the citations of Journal X compared to other publishers for Rushinek & Rushinek (1986) and two other authors. For all authors and all journal publishers, Journal X has a superior citation record by a factor of at least 10. That is, for each citation of any other journal publisher (for the same authors, the same number of authors (2), and the same year), Journal X received at least 10 citations. This is especially surprising for Journal Y, since academic researchers have consistently ranked it statistically significantly better than Journal X. We have repeated this test with several other authors and journal publishers and found the same pattern; therefore, we conclude that this is a pattern rather than an anomaly.
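The at-least-10-to-1 claim reduces to a ratio check over paired counts: for each author pair, divide the Journal X count by the other journal's count and take the smallest such ratio. A sketch with placeholder counts; the numbers below are hypothetical and are not the study's data:

```python
# Hypothetical citation counts per (author pair, journal); placeholders only.
citations = {
    ("Author pair 1", "Journal X"): 45,
    ("Author pair 1", "Journal Y"): 4,
    ("Author pair 2", "Journal X"): 60,
    ("Author pair 2", "Journal Y"): 5,
}

def dominance_factor(counts, journal_a, journal_b):
    """Smallest per-author ratio of journal_a citations to journal_b citations.
    If this minimum is >= 10, journal_a dominates by at least 10x everywhere."""
    authors = {author for author, _ in counts}
    return min(counts[(a, journal_a)] / counts[(a, journal_b)] for a in authors)

factor = dominance_factor(citations, "Journal X", "Journal Y")
print(factor >= 10)  # True for these placeholder counts
```

Taking the minimum rather than the average is deliberate: the study's claim is that the 10x margin holds for every author pair, not merely on average.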
Summary, Conclusions and Implications
In summary, we have reviewed the traditional MIS literature evaluating journals such as Journal Y. We have explained why such methods may not be completely appropriate in some circumstances. In those circumstances, we have demonstrated that less traditional methods of evaluation can have advantages over the more traditional methods. Likewise, such methods can produce results that contradict the traditional methods and can sometimes be more relevant and more applicable.
This study presents a typical court case and shows how the search engine citation journal evaluation method could be used in a court of law to supplement, and even contrast with, the traditional academic ranking of journals. It demonstrates how "quick and dirty" evaluation methods may sometimes be faster, simpler, more practical, more current, objective, easily reproducible and, most importantly, more easily understandable to lay people who have no formal journalism education or domain expertise.
Our conclusion is that for the purpose of forensic MIS expert witnesses, Journal X is a superior journal publisher compared to Journal Y. Therefore, we would expect to find more use for it in a court of law; however, testing that expectation is beyond the scope of the current study and is reserved for future research. We have tested and rejected the null hypothesis that the differences in the number of search engine citations between Journal X and other journal publishers are statistically insignificant, in favor of the alternate hypothesis, which states that these differences are statistically significant.
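One simple way to formalize the hypothesis test described above is a one-sided sign test on paired citation counts: under the null hypothesis of no difference, Journal X outscoring the other publisher in a given matched pairing has probability 1/2, so winning all n pairings has probability (1/2)^n. A sketch with placeholder pairs; the counts below are hypothetical, not the study's actual numbers, and the sign test is our illustrative choice of test rather than one the study names:

```python
from math import comb

def sign_test_p_value(pairs):
    """One-sided sign test: probability of at least k wins out of n
    under the null hypothesis that each side wins with probability 0.5."""
    n = len(pairs)
    k = sum(1 for x, y in pairs if x > y)
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# (Journal X citations, other journal citations) per matched article; placeholders.
pairs = [(45, 4), (60, 5), (38, 3), (52, 5), (41, 2)]
p = sign_test_p_value(pairs)
print(p)  # 0.03125, i.e. (1/2)**5 when Journal X wins all 5 pairings
```

With five wins out of five, p = 0.03125 < 0.05, so the null would be rejected at the conventional 5% level; a real analysis would use the study's full set of matched pairs.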
Our implications are that in the future more studies should be done in the area of forensic computing and litigation support. In such studies, new measures of quality and performance of journal publishers, authors and their articles will emerge. Such studies will be more cross-disciplinary because they will involve a larger variety of professionals and non-professional participants.
Nikos Mylonopoulos and V. Theoharakis, "On-Site: Global Perceptions of IS Journals," Communications of the ACM, September 2001, vol. 44, no. 9, 29-33.
Michael Whitman, Anthony Hendrickson and Anthony Townsend, "Research Commentary. Academic Rewards for Teaching, Research and Service: Data and Discourse," Information Systems Research, June 1999, vol. 10, no. 2, 99-109.
Bill Hardgrave and Kent Walstrom, "Forums for MIS Scholars," Communications of the ACM, November 1997, vol. 40, no. 11, 119-124.
Kent Walstrom, Bill Hardgrave and Rick Wilson, "Forums for Management Information Systems Scholars," Communications of the ACM, 1995, vol. 38, no. 3, 93-102.
Clyde Holsapple, Linda Johnson, Herman Manakyan, and John Tanner, "Business Computing Research Journals: A Normalized Citation Analysis," Journal of Management Information Systems, 1994, vol. 11, no. 1, 131-140.
Mark Gillenson and Joel Stutz, "Academic Issues in MIS: Journals and Books," MIS Quarterly, 1991, vol. 15, no. 4, 147-452.
Rushinek, A. and Rushinek, S. "What Makes Users Happy," Communications of the ACM, July 1986, Vol. 29, No. 7, 594-598.
Rushinek, S. and Rushinek, A. "An Application Model for Computer Assisted Design and Manufacturing Using a Product Evaluation and Selection System," Computers & Industrial Engineering, 1987, Vol. 12, No. 3, 173-180.
Rushinek, A. and Rushinek, S. "Control Through Standard Costs and Variance Analysis for Performance Evaluation," Economic Planning, April, 1990, Vol. 26, No. 2, 3-8.
Rushinek, A. and Rushinek, S. "Accounting Software Evaluation: Hardware, Audit Trails, Backup, Error Recovery and Security," Managerial Auditing Journal, 1995, Vol. 10, No. 9.
Appendix A: Results of Searching http://www.google.com for: Rushinek, A. and Rushinek, S. "What Makes Users Happy"
Citation ... Communications of the ACM >archive Volume 29 , Issue 7 (July 1986) >toc What makes users happy? Authors Avi Rushinek Univ. ... 9-20. 12 A Rushinek , S ... portal.acm.org/ citation.cfm?id=6140&dl=ACM&coll=portal&CFID=15151515&CFTOKEN=6184618 - Similar pages
TOC... What makes users happy? Avi Rushinek, Sara F. Rushinek Pages: 594 - 598 > full text, Pygmalion ... Geoffry S. Howard, Robert D. Smith Pages: 611 - 615 > full text, ... portal.acm.org/ toc.cfm?id=6138&type=issue&coll=portal&dl=ACM&CFID=15151515&CFTOKEN=618461 - Similar pages [ More results from portal.acm.org ]
Designing Responsive Software ... 56-71. Rushinek, A. and Rushinek, S. "What Makes Users Happy?", Communications of the ACM, 1986, 29, pages 584-598. Shneiderman, B ... hci.stanford.edu/cs547/links/johnson.html - 6k - Cached - Similar pages
Reviewer Application You selected to review: Bibliographic Reference #: 6140. "What makes users happy?" Rushinek A., Rushinek S. Communications of the ACM 29: 594-598, 1986. ... www.reviews.com/reviewer/quickreview/ Frame_body.cfm?bib_id=6140 - 10k - Cached - Similar pages
CACM, Volume 29, 198 ... 593; Avi Rushinek, Sara F. Rushinek: What Makes Users Happy? ... 605-610; Geoffry S. Howard, Robert D. Smith: Computer Anxiety in Management: Myth or Reality? ..www.informatik.uni-trier.de/~ley/ db/journals/cacm/cacm29.html - 26k - Cached - Similar pages
[PDF]Wanted: Programming Support for Ensuring Responsiveness Despite ... File Format: PDF/Adobe Acrobat - View as HTML ... 1969. [Rus86] A. Rushinek and SF Rushinek. What makes users happy? ... 1991. [Zha93] L. Zhang, S. Deering, D. Estrin, S. Shenker and D. Zappala. ... www.hpl.hp.com/techreports/98/HPL-98-15.pdf - Similar pages
User's assessment of the Marketing Information Systems ... RUSHINEK, Avi & RUSHINEK, Sara F. What Makes Users Happy ?. ... 701p. VARNEY, S. IS takes charge of customer service, Datamation, 46-51, August 1996. ... www.lrv.eps.ufsc.br/IFIP-WG-9.5/Ifip-cd/ifip/1b3.html - 25k - Cached
[PS]www.dstc.edu.au/RDU/publications/ps_reports/trader-ui.ps File Format: Adobe PostScript - View as Text ... JASIS, 27(5):268- 272, September 1977.  A. Rushinek and S. Rushinek. What makes users happy? Communications to the. ACM, 29(7):594-598, 1986. ... Similar pages
[PDF]An evaluation scheme for trader user interfaces 1 INTRODUCTION File Format: PDF/Adobe Acrobat - View as HTML ... JASIS , 27(5):268-272, September 1977. 13 Page 14.  A. Rushinek and S. Rushinek. What makes users happy? Communications to the ACM , 29(7):594-598, 1986. ... staff.dstc.edu.au/andrewg/papers/icodp95/paper.pdf - Similar pages
Citation ... 3 Chin, JP, Norman, KL, & Shneiderman, 13. Subjective user evaluation of CF PASCAL programming tools. ... 9 Avi Rushinek , Sara F. Rushinek, What makes ... www.acm.org/pubs/citations/proceedings/ chi/57167/p213-chin/ - 35k - Cached