Jet Paper

In this blog post I wish to propose the development of a new piece of open-source software which can populate any piece of digital text with visual media. I will look at the evidence for how users interact with digital reading today, and at how new software could combine with existing technologies to deliver an enhanced reading experience: one which includes the reader, assists the publisher's requirement to reach new readers, and benefits the original vision of the author.

For simplicity I will be focusing on photography with text, not on video, audio, or graphics, in this blog post. I will not be presenting a working model of the software.

This blog post makes the following assumptions about the reader, publisher, and author:

* The reader wants a good story, but doesn’t want visual media to be forced upon them.
* The publisher wants to enhance books with minimal effort, time, and cost.
* The author wants to reach more people with their idea/story.

My challenge is to preserve the traditional written page, yet to enhance it if the reader wants to do so, at no cost to the publisher and with no extra effort from the author.

Let us first look at what is wrong with adding digital content to a page of text.

Heat-map studies of web pages have shown that the more choices a reader is given of where to rest the eye, the more they will jump about looking for the story that interests them (see below):

[Image: heat map of a census homepage]

These studies show that how we distribute media within a text affects how we consume that content. To preserve the linear reading format of the traditional written page, we cannot scatter media liberally about that page.

Next let’s consider Reader Response theory: a reader adds their own subjective, interpretive meaning to the text.

“The text is a program designed to produce events in readers’ minds, not all of them ‘correct.'”

http://faculty.goucher.edu/eng215/reader_response_terms.htm

“Both the reader and the text work together to produce meaning. They are partners in the interpretive process.”

http://www.westga.edu/~dnewton/engl2300/rrlecture.html

No matter how much an author tries to convey their idea, the reader will form their own meaning.

I would like to propose a software solution, which I have called Jet Paper, that benefits readers, publishers, and authors, favours the traditional page, and automatically adds a layer of digital content which does not distract from the reading.

Jet Paper, in its simplest form, is software that is attached to a piece of written text at any stage of its production. It scans that text, picks out keywords, searches the web for various media, pulls them to the device, and is ready to present that content whenever the reader decides to view it.
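To make the scanning step concrete, here is a minimal sketch in Ruby (the language I propose for Jet Paper later in this post). The stop-word list and frequency scoring are purely illustrative stand-ins for the Linked Data and Knowledge Graph lookups discussed under "Technologies required":

```ruby
require 'set'

# Purely illustrative: a tiny stop-word list stands in for a proper
# Linked Data / Knowledge Graph entity lookup.
STOP_WORDS = Set.new(%w[the a an and or of to in on at for with was is it])

# Pick out candidate keywords from a page of text, ranked by frequency.
def extract_keywords(text, limit: 10)
  counts = Hash.new(0)
  text.scan(/[A-Za-z][A-Za-z']+/) do |word|
    next if STOP_WORDS.include?(word.downcase) || word.length < 4
    counts[word.downcase] += 1
  end
  counts.sort_by { |_, n| -n }.first(limit).map(&:first)
end

page = "Raskolnikov crossed the Haymarket towards the bridge over the Neva..."
p extract_keywords(page)   # => e.g. ["raskolnikov", "crossed", "haymarket", ...]
```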

To give more clarity, and using photography as an example: the photos are 'cached' in the device's memory and can be viewed with a simple swipe of the finger. If the reader feels a photo matches, adds to, or improves their vision of the story, they can tap to accept and store it, or swipe to remove it from their story.

This process could be used for any form of digital media found in the keyword search. Each piece of media is found, cached, selected or deselected, and then stored as part of the story under the reader's own Jet Paper profile for that book.
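As a sketch of how those accept/reject decisions might be recorded, the class below keeps a per-book reading profile. The class and field names are my own illustration, not a fixed design:

```ruby
require 'json'

# Illustrative per-book reading profile; all names are my own invention.
class ReadingProfile
  def initialize(book_id)
    @book_id  = book_id
    @accepted = Hash.new { |h, k| h[k] = [] }  # page number => kept media URLs
    @rejected = Hash.new { |h, k| h[k] = [] }  # page number => dismissed URLs
  end

  def accept(page, url)
    @rejected[page].delete(url)
    @accepted[page] << url
  end

  def reject(page, url)
    @accepted[page].delete(url)
    @rejected[page] << url
  end

  # Serialise so the profile can later be shared with other readers.
  def to_json(*)
    JSON.generate(book: @book_id, accepted: @accepted, rejected: @rejected)
  end
end

profile = ReadingProfile.new("crime-and-punishment")
profile.accept(42, "https://example.com/haymarket.jpg")
puts profile.to_json
```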

Jet Paper will be able to scan the entire book and present a collage of images back to the reader in the form of a cover image. This feature is of particular benefit to the publisher, as it will serve as part of the consumer's buying decision, much like a cover design or back-page synopsis does today. With this, Homer's Iliad or Dostoyevsky's Crime and Punishment may discover a completely new reading market. No longer will we have to rely on the publisher or designer to present a cover which attracts the reader. We will be presented with a cover which actually reflects the content (themes, locations, names, items, etc.) of the book.
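One way the collage could be assembled, assuming reader-vote data of the kind described later in this post, is simply to take the best-received image from each chapter. The data structure here is hypothetical:

```ruby
# Hypothetical: build a cover collage from the top-voted image per chapter.
def cover_collage(chapters)
  chapters.map { |ch| ch[:images].max_by { |img| img[:votes] }[:url] }
end

chapters = [
  { title: "Part One", images: [{ url: "axe.jpg", votes: 12 },
                                { url: "street.jpg", votes: 31 }] },
  { title: "Part Two", images: [{ url: "police.jpg", votes: 8 }] }
]
p cover_collage(chapters)   # => ["street.jpg", "police.jpg"]
```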

This flexibility of the cover collage also aids the very place where the reader enters the book. If I, as a reader, decide to share a particularly gory chapter from a horror or crime novel with a friend, they will enter the story at that point. Jet Paper will be able to show them the media which best represents the point in the story they are interested in.

The screen recording below is an example of how text could be scanned and images pooled, ready to be served to a reader. Pay attention to the grid of images which appears as the text is quickly analysed. It is created from publicly available media (please note copyright is addressed towards the end of this post).

The above demonstration pulls the photography into a grid at the side of the text, whereas Jet Paper would house the content behind the text and reveal it with a swipe, as per the image below:

[Image: pull-down reveal of cached imagery on a tablet]

Above is an example of how the media could be presented on a tablet. After caching the page content, the reader swipes down from the top of the page to view the imagery.

Just as Google can pull imagery based on keywords, Jet Paper will behave in much the same way. This demonstrates that the technology is already available. It must be pointed out that the search algorithm is not exclusive to Google, and that although I am using a Google search as an example, those results come from only a few keywords; Jet Paper will deliver media based on a page, several pages, or the entire novel.
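To show how little new technology this step needs, here is a hedged sketch using Flickr's public REST API (one of the providers named under media usage rights below). The API key is a placeholder, and the photo URL format follows Flickr's documented scheme:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Fetch Creative Commons photo URLs for a keyword via the Flickr REST API.
def search_photos(keyword, api_key:, per_page: 5)
  uri = URI("https://api.flickr.com/services/rest/")
  uri.query = URI.encode_www_form(
    method: "flickr.photos.search",
    api_key: api_key,
    text: keyword,
    license: "1,2,4,5",   # restrict to Creative Commons licences (codes per Flickr's docs)
    per_page: per_page,
    format: "json",
    nojsoncallback: 1
  )
  data = JSON.parse(Net::HTTP.get(uri))
  data.dig("photos", "photo").to_a.map do |p|
    "https://live.staticflickr.com/#{p['server']}/#{p['id']}_#{p['secret']}_w.jpg"
  end
end

puts search_photos("Arc de Triomphe", api_key: ENV.fetch("FLICKR_API_KEY"))
```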

Below are some more examples of how effective keyword searches can propel the story for the reader who wants to know or see more:

Jet Paper will also be able to stitch together photographs and present an almost 3D experience from the enormous range of freely available media on-line. Below is an example of this stitching in action, for those readers who have never walked around the Arc de Triomphe:

Jet Paper gets better the more people read and interact with it. Whilst reading, I may select imagery I love; and once thousands of on-line readers allow Jet Paper access to their book profiles, I will have the option of viewing, accepting, or rejecting the consensus of the group.

This group will also act to validate, monitor, and report on the crowd-sourced content being associated with the text: the more people 'vote up' an image, the more likely it will become a default image; the more people 'vote down' an image, the more likely it will disappear from the pool altogether.
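A minimal version of that consensus logic might look like the following; the vote thresholds are invented purely for illustration:

```ruby
# Illustrative consensus rules: images are promoted to defaults or dropped
# from the pool as the crowd votes. Thresholds are invented for this sketch.
PROMOTE_AT = 50    # net up-votes before an image becomes a default
DROP_AT    = -10   # net down-votes before an image leaves the pool

def classify(images)
  images.each_with_object({ default: [], pooled: [], dropped: [] }) do |img, out|
    net = img[:up] - img[:down]
    bucket = if net >= PROMOTE_AT then :default
             elsif net <= DROP_AT then :dropped
             else :pooled
             end
    out[bucket] << img[:url]
  end
end

pool = [
  { url: "street.jpg", up: 120, down: 4 },
  { url: "blurry.jpg", up: 2,   down: 40 },
  { url: "axe.jpg",    up: 15,  down: 6 }
]
p classify(pool)
# => {:default=>["street.jpg"], :pooled=>["axe.jpg"], :dropped=>["blurry.jpg"]}
```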

From the publisher's perspective, they can run the Jet Paper software on their titles, check and select imagery prior to distribution, and seed it with dedicated imagery which promotes the story. This will be particularly useful when creating the initial pool of content used in the cover-image collage. The publisher will also be able to set a primary, single image which acts as the main cover for the book wherever the image size would be too small to show the collage, for example Amazon thumbnails.

This level of control for the publisher will be a key contributing factor in the success and adoption of Jet Paper throughout their range of titles. They will be given access to additional Jet Paper settings that allow them to control whether the content is supplied solely by themselves, shared on-line at all, or interacted with by the reader.
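Those controls could be as simple as a settings object attached to each title; every field name here is my own hypothetical illustration:

```ruby
# Hypothetical publisher-facing settings for a single title.
PublisherSettings = Struct.new(
  :publisher_content_only,  # true: only publisher-seeded media is shown
  :share_online,            # false: reader profiles never leave the device
  :reader_interaction,      # false: imagery is view-only, no voting
  :cover_thumbnail_url,     # single image for contexts too small for a collage
  keyword_init: true
)

settings = PublisherSettings.new(
  publisher_content_only: false,
  share_online: true,
  reader_interaction: true,
  cover_thumbnail_url: "https://example.com/covers/crime-and-punishment.jpg"
)
```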

Authors can choose to write as they always have or, if they wish, contribute their own media to the Jet Paper software as they research, write, or edit the story. Some authors may visit a location which inspires them and wish to add their photographs to the metadata for the story. They may wish to record some sounds with their phone and add those too, just as an author would build a scrapbook of clippings. Only later, towards publication, will they need to decide on its relevance and inclusion in the Jet Paper file, or they can simply let the readers decide.

Technologies required:

Jet Paper is powered by existing technologies, which include, but are not limited to: Linked Data, Google's Knowledge Graph, Facebook's Open Graph Protocol, the Zemanta media algorithm, RSS feeds, and APIs from various publicly available media sources. Some of these technologies are freely available open source, and some will need to be licensed, particularly in the case of Zemanta. Jet Paper is written in open-source Ruby on Rails for maximum reach across all devices, producing a portable, lightweight app.
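As one concrete example of these building blocks, the snippet below reads a web page's Open Graph tags (the `og:image` property) to discover its representative image. A production version would use a proper HTML parser rather than a regular expression:

```ruby
require 'net/http'
require 'uri'

# Crude illustration: scan a page's HTML for its og:image meta tag.
# Assumes the tag lists the property attribute before content.
def og_image(url)
  html = Net::HTTP.get(URI(url))
  html[/<meta[^>]+property=["']og:image["'][^>]+content=["']([^"']+)["']/, 1]
end

puts og_image("https://en.wikipedia.org/wiki/Arc_de_Triomphe")
```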

Regarding media usage rights:

Creative Commons imagery will be utilised; however, deals will be done with content providers (Flickr, Facebook, Getty Images, etc., in the case of photography) to allow producers to 'opt in' to this content pool and to be credited (either financially or by name) when their work is used.

As with many areas of social media content, creators are often happy for their work to be used if they are given credit for it. In the case of fan fiction, or any loyal fan base, many readers will actively contribute media to improve the reading experience.

In conclusion

Jet Paper software will bring new and old texts to life like never before. For the author, nothing need change unless they want to add additional material to their work. For the publisher, media enhancement will utilise the content freely available on-line, with moderation and promotional controls, monitored and promoted within the fan base or community which most enjoys the story. The reader will get a more rounded understanding of the story itself, from the front-cover image through to the original vision the author intended. The technology exists already, yet it is fragmented across a variety of tools; unifying them will bring a vivid reading experience to a level not achieved by existing software or on-line collaborative methods.
