From blawlor at nfais.org Tue Jan 4 11:06:25 2011
From: blawlor at nfais.org (Bonnie Lawlor)
Date: Tue, 04 Jan 2011 11:06:25 -0500
Subject: [nfais-l] Final Reminder - Conference Discounts
Message-ID: <029301cbac29$5717b840$054728c0$@org>

FINAL REMINDER: EARLY BIRD DISCOUNTS FOR THE 2011 NFAIS ANNUAL CONFERENCE END THIS FRIDAY

Early bird registrations for the 2011 NFAIS Annual Conference are available only until this coming Friday, January 7, 2011. Discounts include up to $200 off the full registration fee, and NFAIS members registering three or more staff at the same time receive even greater savings (for details see the registration form at http://nfais.brightegg.com/page/295-register-for-2011-annual-conference).

The theme of the conference is "Taming the Information Tsunami: The New World of Discovery," and it will take place February 27 - March 1, 2011 at the historic Hyatt at the Bellevue in Philadelphia, PA. The three-day meeting will take a look at today's world of information overload and at how publishers and librarians are navigating the exponential growth of digital information to provide scholars and researchers with the reliable, relevant information that deserves their time and attention - no matter what the source, language or medium!

Highlights include:

* A thought-provoking look at the complex problems and entrepreneurial opportunities offered by today's information explosion from Dan Gillmor, author of We the Media
* Survey results from IDC on the sources of today's digital information explosion and the expected information growth rates in the coming years
* A panel of librarians and researchers addressing how they are adapting to information overload - the tools they use, what works, what doesn't, and how their jobs have changed as a result
* A look at the impact of digital technology on how we think and how we process and use information from Steven Berlin Johnson, Contributing Editor, Wired magazine
* Case studies from the Library of Congress, Nature Publishing, and the Journal of Visualized Experiments on coverage of content across all media and diverse languages to ensure the relevance and comprehensiveness of their products and services
* The Miles Conrad Memorial Lecture given by award recipient Dr. Ben Shneiderman, Professor and Founding Director of the Human-Computer Interaction Laboratory, University of Maryland
* Examples of the current and emerging technologies - cloud computing, augmented reality, semantic searching, machine thinking, automatic translation - that are shaping the future of information discovery
* A look at the challenges of acquiring, preserving, and delivering the huge volume of information that comprises the U.S. national record from David Ferriero, Archivist of the United States
* A discussion of the future of information discovery - how content providers, librarians, faculty, and users will efficiently and effectively navigate the tidal waves of content that are forthcoming

Attend the 53rd NFAIS Annual Conference and learn how you can ensure that your products and services allow information seekers to successfully navigate the information tsunami. To register or obtain more information, contact Jill O'Neill, NFAIS Director of Communication and Planning (jilloneill at nfais.org or 215-893-1561), or visit the NFAIS Web site at http://nfais.brightegg.com/page/291-2011-nfais-annual-conference

The National Federation of Advanced Information Services (NFAIS), 1518 Walnut Street, Suite 1004, Philadelphia, PA 19102-3403.
NFAIS: Serving the Global Information Community
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jillmwo at gmail.com Wed Jan 12 10:14:13 2011
From: jillmwo at gmail.com (Jill O'Neill)
Date: Wed, 12 Jan 2011 10:14:13 -0500
Subject: [nfais-l] Test Message, January 12, 2010
Message-ID:

Test message for purposes of list maintenance. (Apologies to all for clogging up email inboxes!)

--
Jill O'Neill
jillmwo at gmail.com
http://www.linkedin.com/in/jilloneill
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jilloneill at nfais.org Mon Jan 31 13:36:01 2011
From: jilloneill at nfais.org (Jill O'Neill)
Date: Mon, 31 Jan 2011 13:36:01 -0500
Subject: [nfais-l] NFAIS Enotes, August 2010
Message-ID: <721D56996945477886DF2E8D5676CECF@DDPXRT91>

NFAIS Enotes, August 2010
Written and Compiled by Jill O'Neill

Scholarly Endeavors, Part I

Most of those working in the realm of scholarly and research publication have encountered the concept of the Carnegie Classification System as applied to various institutions of higher education (IHE). The classifications were created back in 1970 as a means of differentiating among different sorts of IHEs and, as I referenced in the August 2010 issue of Enotes, over just a few decades there arose a recognition that the classifications were a source of competitive envy. At one point, a "doctorate-granting university" had simply been judged by how many federal research dollars were poured into its doctoral degree programs. Currently, however, a doctorate-granting university falls under one of three descriptive categories, and the definitions of those categories have changed dramatically since the 2005 revision, which replaced the earlier labels of Research I & II, Doctoral I & II, and Doctoral/Research Extensive and Intensive (see: http://classifications.carnegiefoundation.org/descriptions/basic.php and http://www.carnegiefoundation.org/sites/default/files/publications/elibrary_pdf_634.pdf).

The Carnegie Classifications are intended to be primarily descriptive rather than prescriptive. But the cachet that goes with being (for example) a Research I institution is significant. Alumni bequests, gifts of endowed chairs and professorships, as well as research grants, tend to go to those IHEs that have already achieved distinction. Membership in the Association of Research Libraries (ARL), for example, is tied to the classification of the parent institution under the Carnegie Classification as a research university (although ARL overlays an additional set of criteria pertaining to continuing commitment to and investment in the library).

As institutions redefine the nature of the learning experience that they offer, the library behind that educational mission must also be redefined. For the past twenty years, the library focus has been primarily on the transition from print to electronic resources. If we accept that the tipping point of that transition has been reached, then the next question becomes how collections, as well as the business models in support of access to those collections, are redesigned or modified to serve a more fragmented but potentially more tightly targeted user population. One of the most obvious shifts has been the removal of on-site access to physical volumes within the library.
Kent State's 2.9-million-volume collection will be moved off-site in increments of 5% over the next decade in order to reallocate space to other student needs, even as space is also being made to spotlight items from the library's special collections (see: http://kentwired.com/half-the-books-are-checking-out-permanently/). Library Dean James Bracken believes the move is appropriate for a collection in which a full quarter of the print holdings have never circulated. In 2006, Kent State was classified by the Carnegie Foundation as a "high research activity" institution (http://einside.kent.edu/?type=art&id=5954&) and it has maintained that standing, as well as its standing in ARL.

There have been other moves to adapt, one such being the 18-month-old initiative of 2CUL (http://2cul.org/). In this initiative, the Columbia University Libraries and the Cornell University Library, the two CULs of the acronym, decided that they would attempt a more collaborative approach to collection development and resource sharing. The first concrete example emerging from this initiative was the Slavic and East European Collection Development Agreement announced in mid-September (see: http://www.columbia.edu/cu/lweb/news/libraries/2010/20100916.slavic.html). The press release in this instance is worth reading for the quotes from John Micgiel, director of Columbia's East Central European Center, and from Anne R. Kenney, Cornell's Carl A. Kroch University Librarian. The emphasis in both instances is on how this initiative supports the libraries in doing more with less in support of highly specialized user populations engaged in deep research. Expedited interlibrary loan (ILL) is one element of the joint collaboration, but it is worth noting as well that part of that joint support involves one staff FTE being made available, both virtually and physically, to the communities at both institutions (see: Library Journal, http://www.libraryjournal.com/lj/community/academiclibraries/886842-419/cornell__columbia_2cul_program.html.csp).

In addition to moving books off-site, and as an antidote to the poor ROI of non-circulating titles, there has been increasing interest in the concept of patron-driven acquisition (PDA). The Online Dictionary of Library and Information Science (Libraries Unlimited, ABC-CLIO) defines PDA as "An e-book purchasing model introduced by NetLibrary in which selection decisions are based on input from library patrons. Working with the vendor, the librarian establishes an approval profile based on LC classification, subject, educational level, publication date, cost, and other criteria. E-book titles matching the profile are then shared with the library's community of users via MARC records in the catalog. Once a specific e-book has been discovered and viewed by a predetermined number of patrons, it is automatically purchased for the collection. Libraries with limited budgets can set spending limits for their PDA plans. Variations on this model have been developed by Ingram Digital's MyiLibrary and by Ebook Library (EBL). Synonymous with demand-driven acquisition" (see: http://lu.com/odlis/odlis_p.cfm#patrondriven).
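The trigger mechanism in that definition is easy to picture in code. Below is a minimal, hypothetical sketch of the purchase logic only (not any vendor's actual implementation): titles matching an approval profile are exposed to patrons, and once a title has been viewed by a predetermined number of distinct patrons it is purchased automatically, subject to a library-defined spending cap. Class names, thresholds, and prices are all illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PDAPlan:
    """Hypothetical patron-driven acquisition plan: auto-purchase after N views, within a budget cap."""
    view_threshold: int = 3          # purchase triggers after this many distinct patron views
    spending_cap: float = 10_000.00  # library-defined budget limit for the plan
    spent: float = 0.0
    views: dict = field(default_factory=dict)   # title_id -> set of patron ids who viewed it
    purchased: set = field(default_factory=set)

    def record_view(self, title_id: str, patron_id: str, price: float) -> bool:
        """Record a patron view; return True if this view triggers an automatic purchase."""
        if title_id in self.purchased:
            return False
        viewers = self.views.setdefault(title_id, set())
        viewers.add(patron_id)
        if len(viewers) >= self.view_threshold and self.spent + price <= self.spending_cap:
            self.purchased.add(title_id)
            self.spent += price
            return True
        return False

# Example: the third distinct patron view of an e-book triggers its purchase.
plan = PDAPlan(view_threshold=3, spending_cap=500.00)
for patron in ("p1", "p2", "p3"):
    bought = plan.record_view("ebook-0001", patron, price=85.00)
print(bought, plan.spent)  # True 85.0
```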
The problem with such a model, as mentioned by one speaker at ALISE in September 2010, is that the existing systems for the profiling-based purchasing approaches that have been in use for twenty years may be stretched to their structural limits and pushed to the edge of their capacity (see: http://oranjarra.com/images/stories/best%20practice%20and%20cooperation%20in%20resource%20sharing%20among%20academic%20library%20consortia%20a4.pdf). The same speaker, Stephen Pugh of Oranjarra Partners, noted issues with each of the four basic approaches currently in place for purchasing ebooks: lack of tailoring to institutional need in the Big Deal, the higher costs of title-by-title purchasing, the budgetary uncertainty of patron-driven acquisition (even as the book content can be proven to be used), and the challenges to vendor-supported systems in the context of approval-plan buying.

The Chronicle of Higher Education spoke with James Mouw, Assistant Director for Technical and Electronic Resources, University of Chicago Library, who indicated that, from his perspective, there were three main criteria to be considered in purchasing stand-alone scholarly monographs in electronic form: (1) both print and electronic forms had to be made available simultaneously, in order to permit the library to select the form best suited to its needs; (2) digital monographs had to be offered for purchase in a way that allowed the library to pick and choose individual titles (in the hope of not licensing duplicate copies of titles across more than one aggregation of material); and (3) access to that digital content, once purchased, had to be in perpetuity. With regard to actual pricing, the article suggested that prices for print and digital had to be in line with one another (see: http://chronicle.com/blogs/pageview/e-books-what-a-librarian-wants/26777).

And while most academic libraries don't generally purchase textbooks for their collections, it's worth recognizing that this is another area where user demands surrounding ebooks are reshaping the products of content providers. For a particularly engaging look at that aspect, you may want to view a Prezi presentation made by Rob Reynolds of Xplana to the AAP/PSP Books Committee (see: http://blog.xplana.com/2010/09/the-past-present-and-future-of-higher-education-textbook-publishing/). Reynolds identified nine trends with regard to the evolution of digital textbooks and e-learning content. Those trends primarily focused on business models and the rise of open educational resources (OER), but two critical points were also raised:

* the development of a common XML format for e-textbooks (characterized as a "modified ePub format with a standard DTD that could be extended by each user"), and
* the importance of devices and branded devices (that is, institutionally-branded versions of tablets given to students with pre-loaded publisher content)

(see: http://blog.xplana.com/2010/09/nine-important-trends-in-the-evolution-of-digital-textbooks-and-e-learning-content/).

I found it intriguing that Reynolds anticipated that students were going to accept locked files on locked devices. It's just not that hard to play with the ePub format. For the past few months, I have experimented with ebooks. Reading on both the Amazon Kindle and the Apple iPod Touch, I've accessed, consumed, and stored between 150 and 200 titles in various applications.
Apart from those devices, I have sampled titles through Web-based browser interfaces in both mobile and desktop environments. For fun, you might compare the following video of reading an EPUB title on a Kindle (http://vimeo.com/15015138) with a video demonstrating reading via an HTML5-enabled reader (http://vimeo.com/15826571). Both videos are from a European start-up, Booki.sh (their URL as well as their brand). The description of their service reads: "It works in modern web browsers like Safari, Firefox, Chrome and Opera. You can read books on your iPad, iPhone, Blackberry, Kindle 3 and similar devices. And you can access your books online or offline." Monocle is the open-source software behind this particular interface, and the publishers that I could locate using it were primarily trade publishers in Europe. However, as one critic noted, the issue is whether users are really always going to be connected at a decent speed and with an up-to-date browser.

I've gone so far as to experiment with the Calibre software (http://calibre-ebook.com/), which enables a user to manage a collection of digital titles, converting an EPUB file to whatever format is best suited to a particular device. When a friend in academia asked me to give her my thoughts on an 1873 commentary on the Episcopalian prayer book intended for young adults of the time, I went looking for a Kindle-ready edition on Gutenberg and at the Open Library. Neither had a usable file, but at Google Books the title surfaced as downloadable in either PDF or EPUB (see: http://books.google.com/books?id=pfxDAAAAYAAJ&dq=The%20Bishop%20and%20Nanette&pg=PP1#v=onepage&q&f=false). Converting that file through Calibre was a little time-consuming, and the end product was riddled with OCR-caused typos, but I was eventually able to read the book on my Kindle. Subsequently, another academic told me of the WYSIWYG editorial tools Sigil (http://code.google.com/p/sigil/) and the Atlantis Word Processor (http://www.atlantiswordprocessor.com/en/). While the tools are not particularly user-friendly, there's no reason why a motivated reader can't work past the barriers and manipulate existing EPUB files. Readers of light entertainment will happily make do with publishers' ready-made offerings, but academic users may want to work with and make available less well-known materials, and it should be expected that tools such as Sigil and the Atlantis Word Processor will continue to be upgraded to serve those researchers.

What emerges from such experimentation, however, is that Hugh McGuire was correct when he wrote for O'Reilly that "an .epub file is really just a website, written in XHTML, with a few special characteristics and wrapped up. It's wrapped up so that it is self-contained... so that it doesn't appear to be a website and so that it is harder to do the things with an ebook that one expects to be able to do with a website. Epub is really a way to build a website without letting readers or publishers know it" (see: http://radar.oreilly.com/2010/09/beyond-ebooks-publisher-as-api.html).

Delivery of single-volume monographs and textbooks in digital form represents an area of eager interest and rapid development. Academics have already mastered the concept of websites for pedagogical purposes; mastering an ePub format (even a modified one) isn't much of a stretch. Naturally, not all serious researchers or faculty will want to be bothered, but some percentage will.
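McGuire's point is easy to verify for yourself: an EPUB is a ZIP archive whose META-INF/container.xml names an OPF package document, with the reading content stored as ordinary XHTML pages. The short sketch below uses only the Python standard library to peek inside such a file; the filename is hypothetical, and this is an illustration of the format's structure rather than a substitute for a proper EPUB library.

```python
import zipfile
import xml.etree.ElementTree as ET

CONTAINER_NS = {"c": "urn:oasis:names:tc:opendocument:xmlns:container"}

def inspect_epub(path: str) -> None:
    """Open an EPUB as the ZIP archive it is and show the package file plus its XHTML content."""
    with zipfile.ZipFile(path) as z:
        # Every EPUB carries META-INF/container.xml, which points at the OPF package document.
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(".//c:rootfile", CONTAINER_NS).attrib["full-path"]
        print("Package document:", opf_path)
        # The actual "book" is plain web content: XHTML pages, CSS, and images.
        for name in z.namelist():
            if name.endswith((".xhtml", ".html", ".htm")):
                print("Content page:", name)

inspect_epub("the_bishop_and_nanette.epub")  # hypothetical filename
```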
In September, the European Foundation for Quality in E-Learning met for its annual conference. One of the papers delivered there centered not just on the creation of open educational resources, but also on the creation of open educational practices. Specifically, the report on the OPAL project was oriented towards the creation of engaging learning experiences - a focus on activity and use rather than on the ingestion of content (see this set of slides, particularly slide #10, at http://www.slideshare.net/grainne/conole-lisbon). While the speaker was clearly aware of the gap between vision and practical reality and of the overall immaturity of the open educational resource movement, the hope is that education will be reformed (in the best sense). By slide #32, where the speaker, Grainne Conole, Professor of Educational Technology, The Open University, UK, lists questions for further investigation, we see references to the need for more sophisticated tools and resources and calls for innovation in the use and reuse of the open repositories that are only just now emerging. More on this initiative may be found at http://www.oer-quality.org/.

The metrics by which we evaluate educational institutions are changing. As a result of that shift, their libraries are changing, and the nature and use of, as well as attitudes towards, the content held in those libraries are changing too. If you are only looking to the short term (3-5 years out), the landscape of this marketplace may not be that different, but the longer-term outlook - say 2020 - may well be very, very different.

*************************

Want to learn more about portable devices and the e-reading experience? Look for the NFAIS webinar on this topic to be held early next year. Also, early bird registration discounts for the 2011 NFAIS Annual Conference end on January 7th. Until then, savings of up to $100 off the full registration fee are available, and NFAIS members registering three or more staff at the same time receive even greater savings (for details see the registration form at http://nfais.brightegg.com/page/295-register-for-2011-annual-conference).

2010 SPONSORS
Accessible Archives, Inc.
American Psychological Association/PsycINFO
The British Library
CAS
Copyright Clearance Center
CrossRef
Data Conversion Laboratory
Defense Technical Information Center (DTIC)
Getty Research Institute
H. W. Wilson
Information Today, Inc.
Office of Scientific & Technical Information, DOE
Philosopher's Information Center
ProQuest
Really Strategies, Inc.
Temis, Inc.
Thomson Reuters Healthcare & Science
Thomson Reuters IP Solutions
Unlimited Priorities Corporation

Jill O'Neill
Director, Planning & Communication
NFAIS
(v) 215-893-1561
(email) jilloneill at nfais.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jilloneill at nfais.org Mon Jan 31 13:36:07 2011
From: jilloneill at nfais.org (Jill O'Neill)
Date: Mon, 31 Jan 2011 13:36:07 -0500
Subject: [nfais-l] NFAIS Enotes, September 2010
Message-ID: <0B6864D7B8384E0A9A8989AFDFE7B6FE@DDPXRT91>

NFAIS Enotes, September 2010
Written and Compiled by Jill O'Neill

Scholarly Endeavors, Part II

In August of 2010, the Apollo Group put out a position paper entitled Higher Education at a Crossroads. If you are unfamiliar with the organization, the Apollo Group is the parent company of for-profit educational institutions such as the University of Phoenix (U.S.)
and Meritus University (Canada), as well as other proprietary entities aimed at offering degrees to working adults. The position paper emphasizes that the educational models in place at these proprietary institutions are better suited to a non-traditional student population (older, self-supporting, balancing work and dependents, etc.). It further underscores the idea that for-profit institutions are absolutely necessary if the United States is to build an appropriately educated workforce. Page 22 of the report states that, in order to meet President Obama's national education goal, "the system will need to accommodate 13.1 million graduates. At a time when states are having difficulty even maintaining budgetary resources for higher education and are cutting both faculty positions and student enrollment capacity, how can states afford to educate tens of millions of additional students and produce 13.1 million additional college graduates?" The report comes up with a figure of $794 billion in federal, state, and local support that would be required to meet the need for the educated workforce referenced by Obama in his speech on economic growth at Texas A&M on August 9. The President had noted that "Over the next decade, nearly eight in ten new job openings in the U.S. will require some workforce training or postsecondary education. And of the thirty fastest growing occupations in America, half require at least a 4-year college degree." (See Higher Education at a Crossroads: http://www.apollogrp.edu/Investor/Reports/Higher_Education_at_a_Crossroads_FINALv2[1].pdf and Restoring America's Leadership in Higher Education (Remarks by President Obama to Texas A&M, August 9, 2010): http://www.politico.com/static/PPM169_restoringamerica.html).

When the Chronicle of Higher Education reported on the publication of the position paper in the context of a negative report from the Government Accountability Office on recruitment practices at for-profit educational institutions, an interesting discussion broke out in the comments (see: http://chronicle.com/article/With-Statistics-Heavy-Report/124101/). Some respondents believed that the for-profit entities were justified in putting forward a case for their existence, while others criticized the quality of education provided by those institutions in training potentially at-risk students (see the GAO report at http://www.gao.gov/new.items/d10948t.pdf). This is an old debate, one that suggests both a defensive posture on the part of traditional institutions of higher education (IHE) seeking to avoid any siphoning-off of revenues received through federal and state subsidies, and an overly optimistic posture on the part of the for-profit entities.

There are rigorous constraints on budgets for most IHEs, just when the need for a more educated workforce is being stressed as integral to growing the economy. And because of those constraints, one also sees discussions questioning whether research activities should be funded (or dispensed with) across these institutions. A random tweet took me to a Georgia Tech faculty blog posting that asked why universities should do research (see: http://wwc.demillo.com/2010/07/05/why-universities-do-research/). Rich DeMillo, Distinguished Professor of Computing and Management at Georgia Tech, noted that for many universities research is a losing proposition: one provost told him that for every research dollar coming in, the institution was spending $2.50, due to an imbalance between man-hours spent on instruction and
man-hours spent on research. He noted as well that the hope of commercializing and licensing intellectual property was, for most institutions, equally unprofitable. Institutions aspire to become Carnegie I schools more out of "institutional envy" than for any other motive, even though innovative IHEs may find other ways of thriving. His point (made amidst Georgia's significant educational funding crisis) was that the differences between teaching universities and research universities should be recognized and funded appropriately, without penalizing an institution for being one or the other.

Another academic (also from the Georgia Tech community) noted in January that perhaps there were, in fact, things that the for-profit educational institutions could do more successfully and/or more economically than Georgia Tech (see: http://rjlipton.wordpress.com/2010/01/29/an-educational-extinction-event/). If the role of the university is to educate, socialize, and aid students in building effective professional networks, then the University of Phoenix might well be better suited to doing that in an online environment, while the likes of Georgia Tech refocused its attention on research and innovation.

In a Forbes article discussing the potential of online learning for higher education in the US, Taylor Walsh noted that the University of California had announced a pilot project for a "large slate of online introductory courses across its ten campuses. If they pass muster at the culmination of the pilot, these Web courses could eventually be used to teach the universities' own undergraduates or expand the UC student body by appealing to new audiences, easing bottlenecks in crowded campuses or providing a desperately needed revenue stream." She concludes that until prestigious institutions such as Yale and the University of California find a way to fully embrace online distance learning, their economic models will be questioned (see: http://www.forbes.com/2010/08/01/online-classes-internet-technology-opinions-best-colleges-10-walsh.html).

Why are universities clinging to what appears to be something of an outmoded economic model? According to an op-ed appearing in Inside Higher Ed, "the main reason is that universities do not want to admit to the public that student dollars and state funds are spent on other things than instruction and related research. As many professors have told me, they do not believe that the public would support the research mission of the university, so the university has to hide how it spends its money" (see: http://www.insidehighered.com/views/2010/06/04/samuels).

Lest you think that this is primarily a U.S.-centric concern, European entities are also thinking about the problem. Frank Gannon, Director General of Science Foundation Ireland, the major funding agency fueling research in higher education in Ireland, ruminated on issues of appropriate funding when he referenced the mythical Roman deity Janus and the necessity of mixing corporate and government funding of research in universities (see: http://frankgannon.wordpress.com/2010/08/26/society's-janus-view-of-researchers/). The duality, as he points out, leads to mixed public trust in scientific results, contaminated as such findings may be by commercial interests or political ideologies if either funding source is permitted to dominate.
Within the framework of scholarly publishing, it is understood that the formal auspices under which any research is done tend to influence how results from that research will be framed, disseminated, and publicized to a broader audience. That understanding is less widespread outside the information community. During the month of August, I noted a higher-than-usual volume of noise about the concept of peer review and its purpose. There were articles in the New York Times and in the Chronicle of Higher Education, as well as entries by various contributors to the Scholarly Kitchen blog. What made me sit up and pay closer attention, however, was a screed that appeared in the UK science and technology publication Spiked.

Two authors of a somewhat controversial book, The Spirit Level: Why Equality is Better for Everyone, had responded to three semi-professionally published critiques of their work and then announced that any future discussion of the book's findings would only receive a response if such discussion were published in a peer-reviewed journal. The editor of Spiked called that "an extraordinary condition on future debate about their book." His editorial made the point that peer review was (strictly speaking) not a judgment made by the author's peers of whether a finding was the final word on a subject. It was rather an indicator of whether or not the primary investigator had done a proper write-up of a research inquiry fit for publication - that is, one found to be grounded in the literature, performed through a proper protocol or methodology, with findings that had not been fabricated. "There is a censorious dynamic at play here, as a divide is erected between those who are peer-reviewed and those who are not, between those who we should listen to and engage with and those we should look down our noses at - in effect between those who say mainstream, acceptable things and those who spout off-the-wall, experimental stuff" (see: http://www.spiked-online.com/index.php/site/article/9383/).

It is the phrase "censorious dynamic" to which I would draw your attention, not because it conjures up the notion of censorship (likely the author's intent), but because there is always a conflict between those whose investment of time and study gives them the elevated status of authoritative credentials and those who know themselves to be less well educated on a topic but who don't believe themselves incapable of understanding it, if offered the opportunity to learn. The expectation is that there will be an interactive exchange so that both sides come to understand one another. That conflict is at the heart of our concept of the Academy and of the investigative process. Like so many social processes, peer review is a series of human exchanges, hampered by social ineptitude as much as by professional reticence. Kent Anderson on the Scholarly Kitchen blog put it most bluntly when he wrote about recent instances in which bad science was published in ostensibly respectable publications. He said, "The entire scientific publishing genre is losing credibility with the public, putting the article, the journal, and the peer review process at risk" (see: http://scholarlykitchen.sspnet.org/2010/08/02/left-handed-cancer-box-springs-scientific-american-branding-and-trust/).

The conflict is evident not just in discussions of science and peer review, but also in discussions of the roles of libraries in the Digital Age. How much does society need gateways and guardians?
And at what point in the process ought such filters to be introduced or withdrawn? At some point, I was directed to a research project known as the Liquid Journal (http://project.liquidpub.org/). The project is funded by the European Union and supported by a variety of international entities, including the commercial STM publisher Springer Science. The founders believe that the production processes surrounding the creation of scientific knowledge are inefficient, with specific reference to the need for creating formal written materials (articles) and the peer-review process required to vet those materials. Essentially, the project wants to take the creation of scientific knowledge out of the realm of the formally published article as formed in a print environment, and into the realm of something more closely approximating a real-world laboratory where data and simulations are made reusable. The organization calls for the development of "novel services and business models" (see a useful descriptive paper at: http://www.almaweb.unibo.it/all/doc/upl/s1/pdf/WOAPAPERSITO/6/8.7.%20Camussone%20Cuel%20Ponte.pdf).

The entry pointing me to the Liquid Journal project was from the BMJ Group Blog, where Richard Smith, a Public Library of Science Board Member, praised the initiative but recognized that its success would be hampered by the scientific community's conservatism, by the reward system that connects high-impact publications with tenure, and by the collective inertia of the publishing community itself (commercial publishers as well as scientific societies; see: http://blogs.bmj.com/bmj/2010/08/05/richard-smith-enter-the-%E2%80%9Cliquid-journal%E2%80%9D/). Smith believes that a pre-publication peer review process is no longer effective and (perhaps unsurprisingly) that the most effective peer review happens after an article has received maximum exposure within the knowledge community: "Much better to have posted the paper on the Web and let the world decide its importance or lack of it and for the reviewers to have got on with researching."

Post-publication peer review is not always satisfactory, either. UCLA Professor of Emergency Medicine David Schriger wrote in the British Medical Journal that "The solution to the absence of effective post-publication reviews does not lie within its mechanisms; it requires a fundamental reworking of what research is performed, how it is presented, and how it is assimilated into current knowledge. We need fewer papers that are of higher quality and importance. We also need a change in culture to value public discussion if we are to re-engage the medical research community in the kind of post-publication review process that patients deserve." That is a call to action difficult to resist (see: http://www.bmj.com/content/341/bmj.c3803.full).

I sense a number of questions arising for which there are few available answers, but which have significance for the services offered by NFAIS members:

* Has the economic model of education and research housed and funded within a single institution been exhausted?
* Ought the two functions to be separated in order to ensure that both remain economically viable across a range of institutions?
* How might societies (i.e., governments) best re-allocate funding resources for instructional and research efforts across the spectrum of available providers?
* If instruction and research become divorced, what will the cascade effect be for information providers?
That we have to frame such questions suggests that the process by which the Western world generates research and subsequent authoritative knowledge is close to being re-engineered for greater efficiency and productivity. However those questions may get answered, the markets will continue to shift.

*******************************

Early bird registration discounts for the 2011 NFAIS Annual Conference end on January 7th. Until then, savings of up to $100 off the full registration fee are available, and NFAIS members registering three or more staff at the same time receive even greater savings (for details see the registration form at http://nfais.brightegg.com/page/295-register-for-2011-annual-conference).

2010 SPONSORS
Accessible Archives, Inc.
American Psychological Association/PsycINFO
The British Library
CAS
Copyright Clearance Center
CrossRef
Data Conversion Laboratory
Defense Technical Information Center (DTIC)
Getty Research Institute
H. W. Wilson
Information Today, Inc.
Office of Scientific & Technical Information, DOE
Philosopher's Information Center
ProQuest
Really Strategies, Inc.
Temis, Inc.
Thomson Reuters Healthcare & Science
Thomson Reuters IP Solutions
Unlimited Priorities Corporation

Jill O'Neill
Director, Planning & Communication
NFAIS
(v) 215-893-1561
(email) jilloneill at nfais.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jilloneill at nfais.org Mon Jan 31 13:36:15 2011
From: jilloneill at nfais.org (Jill O'Neill)
Date: Mon, 31 Jan 2011 13:36:15 -0500
Subject: [nfais-l] NFAIS Enotes, October 2010
Message-ID:

NFAIS Enotes, October 2010
Written and compiled by Jill O'Neill

Smart Content

In late 2010, there was a small gathering in New York of professionals interested in the topic of analytics. Seth Grimes of AltaPlana had organized this Smart Content event. Laying the groundwork early on, he had solicited input from a variety of experts on the definition of "smart content," how an enterprise might benefit from access to such content, and the technologies that content providers might choose to integrate into existing products in order to make them more attractive to those enterprises (see http://www.informationweek.com/news/software/bi/showArticle.jhtml?articleID=228901459&queryText=smart%20content). Among those whose expertise was sought were an Elsevier technologist, a researcher from Xerox PARC, and an analytics analyst.

The summation of their input (from my perspective) was that smart content has characteristics of mark-up and structure that allow it to be flexibly manipulated through automated means. In conjunction with a variety of technologies, patterns and relationships associated with that content can be exposed and enhanced to improve discovery of relevant material in a broader range of contexts and workflows. There is a growing belief that systems of this sort will reduce the cognitive strain on users of uncovering the right piece of material, without their having to know in advance the exact set of query terms or the most appropriate search approach. Instead, the onus is on the system to "recognize" user behavior in a particular context and match exactly the right chunk of content to that user's current task or concern. Mark Stefik of Xerox PARC commented, "This shift potentially employs more resources, more knowledge and more points of view in matching people to content. It opens the door to a much richer process of intermediation between people and content." And a Gilbane Group study referenced by Grimes noted that "smart content is a natural evolution of XML structured content, delivering richer, value-added functionality."
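As a rough illustration of why that mark-up matters, consider the toy record below. The record format, tag names, and subject values are invented for this sketch (they are not any vendor's or standard's schema); the point is simply that when meaning is carried in explicit structure rather than buried in prose, software can filter, recombine, and target content without guessing.

```python
import xml.etree.ElementTree as ET

# A toy "smart content" record: the mark-up, not the prose, carries the machine-readable meaning.
RECORD = """
<article id="a1">
  <title>Statin therapy and cardiovascular outcomes</title>
  <subjects>
    <subject scheme="MeSH">Hydroxymethylglutaryl-CoA Reductase Inhibitors</subject>
    <subject scheme="MeSH">Cardiovascular Diseases</subject>
  </subjects>
  <entities>
    <entity type="drug">atorvastatin</entity>
    <entity type="condition">myocardial infarction</entity>
  </entities>
</article>
"""

def entities_of_type(xml_text: str, entity_type: str) -> list:
    """Return the tagged entities of a given type -- possible only because the content is structured."""
    root = ET.fromstring(xml_text)
    return [e.text for e in root.iterfind(".//entity") if e.get("type") == entity_type]

print(entities_of_type(RECORD, "drug"))  # ['atorvastatin']
```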
It is in that context that I quote the futurist Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic." Workers want to have their needs anticipated, and there can be real benefits in detecting and better understanding hidden usage patterns and behavior. But are the technologies that support smart content sufficiently advanced to pass as magic?

Jeff Fried, Chief Technology Officer at BA Insight, offered a little reality to the Smart Content attendees when he noted that, at present, there is no one-size-fits-all solution for the creation of smart content. Instead, providers might need to assemble different components from the current grab-bag of technologies in order to resolve specific user difficulties. That single point may have been the most significant take-away of the event - the recognition that no single offering will necessarily suffice to satisfy the needs of a particular user population. In order to achieve that moment of seeming "magic" for users, savvy publishers will have to examine a variety of options. "Pearl-growing" was his descriptive phrase for the kinds of combinations that content providers would need in order to develop context-specific solutions. And Fried cautioned his audience that it was going to be necessary to manage expectations during this phase of development. (The video of Fried's presentation may be found at http://vimeo.com/16349851 and his slides may be viewed on Slideshare at http://www.slideshare.net/SmartContent/what-business-innovators-need-to-know-about-content-analytics.)

An overly rapid series of Lightning Talks from vendors throughout the day served to illuminate discrete possibilities that might go into diverse solutions. There was, as just one example, the Rosette Linguistics platform offered by Basis Technology, aimed at "extracting meaningful intelligence from unstructured text in Asian, European and Middle Eastern languages" (quoted from their home page at http://www.basistech.com). Basis Technology currently partners with NFAIS member organization TEMIS, another presenter at the Smart Content event. TEMIS presented its Luxid suite of content-enrichment products, successfully deployed by such organizations as Elsevier, Thomson Reuters, and AAAS. Other modules of Luxid include semantic technologies in support of scientific discovery, sentiment analysis, and competitive intelligence.

In the case of search and analytics technology such as that provided by FirstRain (http://www.firstrain.com), the system crawls the Web for factual, well-structured documents (organizational charts, product lines, etc.), which it then analyzes to derive, distill, and organize models that can be dynamically adjusted based on the rate of change within a specific market or industry. FirstRain's technology has been leveraged primarily in the investment and banking industries, fueling such companies as Fidelity and information services such as those offered by Standard & Poor's. Another company present at Smart Content, but focused more on the field of sentiment analysis, was Linguamatics (http://www.linguamatics.com/).
This UK firm was referenced by The New York Times because its tool's analysis of Twitter postings during the UK election accurately predicted the outcome of that election (see "Nation's Political Pulse Taken Using Net Chatter," The New York Times, October 31, 2010, http://www.nytimes.com/2010/11/01/technology/01sentiment.html). Linguamatics' product is also favored by some pharmaceutical firms (Pfizer, Merck, and Amgen among others) on the basis of its usefulness in agile text-mining.

During the presentation by Darrell Gunter (an Elsevier veteran, recently moved to AIP), reference was made in passing to the Semantic Wave report by Mills Davis, founder and director of Project10X (http://www.project10x.com/about.php). The executive summary of that report is useful for its positioning of various technologies intended to leverage "the...Web of connected intelligences." It notes several technology trends as driving this next phase of Web development:

* intelligent user interfaces enhancing user productivity and satisfaction
* collective knowledge systems as "killer" apps
* semantic applications including, but not limited to, ontology-driven discovery in a range of professional fields (law, medicine, defense, etc.)
* semantic infrastructures in support of integration and interoperability
* semantic modeling and solution development

In other words, these technologies are in support of increasingly complex information systems - what the Semantic Wave report characterizes as those representing "meanings and knowledge...separately from content or behavior artifacts," rendering both understandable by people and machines. Such technologies are still at a relatively nascent stage of development, in the sense that even those referenced above that have been introduced into the market have yet to reach a point of adoption where they are considered entirely mainstream. They are certainly being implemented in a variety of contexts (pharmaceutical, legal, financial, business, etc.), but the average user sees only a new tweak to an interface, a dashboard, or a result set, without understanding what's going on in the hidden black box behind it. That's their claim to creating "magic" for the user.

The leveraging of these technologies will be the next step in highly specialized information environments. Most of us have been in an environment where a research professional has stated in a matter-of-fact manner that he or she knew everyone working in a particular space surrounding a scientific question or challenge. A success story frequently put forward is that of Collexis, acquired by Elsevier in mid-2010: a technology that leveraged the relationships between researchers in ways that enabled institutions to better capture and recognize researcher productivity, while enhancing the ability of the individual researcher to identify new entrants into a given field in the interest of building new collaborative efforts. Speaking very generally, Collexis is a sophisticated mixture of entity extraction, pattern detection, and data-mining. That was the point made by Richard Stanton at the Smart Content conference.

As many within this community are aware, a huge challenge is the disambiguation of a specific vocabulary term or phrase when it is extracted from its context (Madonna the singer vs. Madonna the religious figure). It's the use of the language surrounding a term or phrase that a system must be capable of analyzing in order to be statistically confident that something is related or relevant to a particular query. Taxonomies and ontologies continue to play a role, but perhaps not a stand-alone role; they offer the greatest value-add in conjunction with semantic technologies.
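The "Madonna" problem comes down to comparing the words around an occurrence with the vocabulary typical of each candidate sense. The sketch below is a deliberately naive illustration of that idea (tiny hand-made sense profiles, simple term-overlap scoring); it is not how Collexis or any commercial engine actually works, and every term list here is invented for the example.

```python
# Naive word-sense disambiguation: score each candidate sense by how many of its
# profile terms appear in the words surrounding the ambiguous mention.
SENSE_PROFILES = {
    "Madonna (singer)": {"album", "tour", "pop", "billboard", "concert", "single"},
    "Madonna (religious figure)": {"virgin", "icon", "renaissance", "painting", "church", "mary"},
}

def disambiguate(context: str, profiles: dict) -> str:
    """Pick the sense whose profile shares the most terms with the surrounding text."""
    words = set(context.lower().split())
    scores = {sense: len(words & terms) for sense, terms in profiles.items()}
    return max(scores, key=scores.get)

sentence = "The Renaissance painting shows the Madonna enthroned in the church"
print(disambiguate(sentence, SENSE_PROFILES))  # Madonna (religious figure)
```

A production system would of course work from statistical language models built over large corpora rather than hand-curated term sets, but the underlying intuition (context decides the sense) is the same.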
All of this said, it was an impressively attractive presentation by two sharp young women from IQ Content, an Irish user-experience design consultancy, that crystallized for me the problem in the room on that October day. Randall Snare and Katie McGuane were present to discuss interface design and the creation of a seamless flow in smart content environments. Their approach was that design has to achieve a balance of data (between analytics and content) and that the design team should be made up of individuals in three roles:

* a user experience designer,
* a content strategist, and
* an analytics expert.

Their case study involved an insurance provider who wanted to make it simpler for customers to select a policy, but their discussion of why the final solution contained a three-column design rather than a five-column design raised (for me) the issue that few in this discussion of smart content had addressed. Three options are less confusing than five, and in most instances the user will choose the policy that appears in the middle. The presenters acknowledged, somewhat uncomfortably, during the course of their program segment that design decisions could manipulate the user's ultimate choice on that site. It's just too easy to drive the user's choice. Even worse, an unscrupulous provider could easily ensure that the system consistently displayed the middle option most profitable to the organization rather than the "right" choice for the buyer. Of course, buying an insurance policy is not the same as identifying an answer in a legal or investment information product, but the dashboards and interfaces found in these smart information environments can all too readily slant a user's perception of relevance or importance.

Seth Grimes' one-day Smart Content event wasn't the correct venue in which to raise the issues of information bias and objectivity, or the privacy pitfalls associated with tracking users' information-seeking behaviors. It was a day intended to offer publishers a glimpse of the available options in constructing the best information environment for their users, and it was an interesting array. The semantic technologies that exist now can assist NFAIS members in resolving linguistic and translation issues, thereby making content more discoverable, and, yes, in the right combination they can expose patterns and relationships that help researchers approach problems with agility and with a better grasp of the aspects that may previously have hidden the solutions. There are undoubtedly benefits that will accrue from content housed in these smart environments and, as always, the needs of the professional (legal, financial, scientific, medical, etc.) will drive the immediate implementations. For the enterprise, development of smart content platforms may well have to be a priority in remaining competitive (see, for example, this Gilbane piece about the smart content landscape as it applies to the enterprise: http://gilbane.com/xml/2010/11/understanding-the-smart-content-technology-landscape.html).
In that context, smart content is characterized by enriched content and metadata, component discovery and assembly, collaboration, and federated content management (useful in minimizing duplication of material within the networked environment). The gap between what is offered to the professional market and what is offered to the library market is dramatic. The Gilbane piece referenced earlier closes with a plug for their willingness to consult with businesses on identifying the following:

* The business drivers where smart content will ensure competitive advantage when distributing business information to customers and stakeholders
* The technologies, tools, and skills required to component-ize content and target distribution to various audiences using multiple devices
* The operational roles and governance needed to support smart content development and deployment across an organization
* The implementation planning strategies and challenges involved in upgrading content creation and delivery environments

Any buzz about integrated library systems and where those fail is fairly remote from those types of buzzword bullet points, and yet the concerns of libraries in the delivery of smart content aren't very far removed. Remove the word "business" from the first point and substitute the word "institution" for "organization" in the third, and it's essentially what must be done to sell "smart content" to any Carnegie I research facility. But it does require thinking about the role of a content provider in new ways. Are you building that kind of advanced information service? How long before one of your competitors is ready to offer it? Three to five years is not an unrealistic time frame.

NOT REGISTERED YET FOR THE ANNUAL CONFERENCE? GO TO: http://nfais.brightegg.com/page/295-register-for-2011-annual-conference. The cut-off for discounted hotel rooms is February 7, 2011.

2011 SPONSORS
Access Innovations, Inc.
Accessible Archives, Inc.
American Psychological Association/PsycINFO
American Theological Library Association
CAS
CrossRef
Data Conversion Laboratory
Defense Technical Information Center (DTIC)
Elsevier
Getty Conservation Institute
H. W. Wilson
Information Today, Inc.
International Food Information Service
Philosopher's Information Center
ProQuest
Really Strategies, Inc.
Temis, Inc.
Thomson Reuters Healthcare & Science
Thomson Reuters IP Solutions
Unlimited Priorities Corporation

Jill O'Neill
Director, Planning & Communication
NFAIS
(v) 215-893-1561
(email) jilloneill at nfais.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: