Metadata & the Internet


What are metadata?


Today's Internet is a gigantic, colossal, magnificent jumble. Spiders, robots, screen scraping, and plaintext searches are all desperate attempts to separate the needles from the hay. And they only go as far as the information we've taken the time to put online.

Peer-to-peer networking hopes to transform the desktop, laptop, palmtop, and refrigerator into peers, conversing with one another and making large portions of their data stores accessible online. Of course, exposing even a small percentage of the resources managed by each system on the network would worsen the issue by piling more hay and needles onto the heap. How can we cope with such a massive explosion of data from so many sources?

The new protocols for peer-to-peer applications, which are being built at breakneck speed, add to the mess by disconnecting data from the comparatively bounded arena of the Web and its ubiquitous port 80. Loosening the hyperlinks that connect all of these disparate tools risks scattering the hay and needles to the winds.

Whereas we previously had a separate user interface for each and every information system, the Web provided us with a single user interface - the browser - as well as an organizing concept - the hyperlink - that theoretically enabled us to access all of the content. Peer-to-peer could reverse all of this progress and return us to the dark ages of one application per data type or service.

We already have Napster for MP3s, and work on Docster for documents has begun; can JPEGster and Palmster be far behind?

And how do we search these disparate, transient clumps of data, which appear and disappear as our devices go online and offline, let alone find them in the first place? Napster is held up as proof that everything will work out in the end: the sheer ubiquity of any single MP3 track sidesteps the resource-transience problem.


Isn't this abundance, however, simply the product of its constrained problem space? MP3 files are commonly used, and MP3 rippers make it simple for a large number of people to generate high-quality MP3 files.

As the industry focuses on peer-to-peer technologies, and as the content within these systems becomes more diverse, the technology will have to accommodate content that is harder to collect and less popular; the critical mass of replicated files will not be reached. Problems with finding a specific item may resurface, but this time in a decentralized environment.

Metadata is the stuff of card catalogues, TV guides, Rolodexes, taxonomies, and tables of contents; to borrow a Zen image, it is the finger pointing at the moon. Labels such as "title," "author," "kind," "height," and "language" identify a book, a person, a television show, a species, and so on. Simply put, metadata is data about data.
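To make the idea concrete, here is a minimal sketch of metadata as a mapping of labels to values, for an MP3 track of the kind Napster traded. The field names and values are illustrative, not drawn from any particular standard:

```python
# A minimal sketch: metadata as labeled facts about a resource.
# Field names and values here are illustrative assumptions.
mp3_metadata = {
    "title": "Take Five",
    "creator": "The Dave Brubeck Quartet",
    "kind": "audio/mpeg",
    "duration_seconds": 324,
    "language": "en",
}

def describe(metadata):
    """Render a metadata record as human-readable 'label: value' lines."""
    return "\n".join(f"{label}: {value}" for label, value in metadata.items())

print(describe(mp3_metadata))
```

A shared vocabulary of such labels is precisely what lets a card catalogue (or a search engine) answer questions about things it has never seen the inside of.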

There are groups of professionals who have spent years working on - and effectively solving some of - the tough challenges of categorizing, cataloguing, and making items findable. Developers enlisted the aid of these specialists even in the early days of the Web, knowing that without them, we'd be in a lot of trouble.

The strength of peer-to-peer is its ability to challenge old ideas and reinvent how we do things. This can be helpful, even groundbreaking, but it also has the potential to be highly damaging if we discard the lessons learned from earlier experience with the Web. We know, for example, that the Web suffered as a result of the late addition of metadata infrastructure.


The Web arrived on the scene before we could agree on standard descriptive methods: ways of describing "things." As a result, the vast majority of web resources lack a standard infrastructure for defining and using content properties. WYSIWYG HTML editors don't make a point of displaying their metadata support (if any), nor do they prompt for it.

Search engines leave little room for registering metadata alongside the pages it describes. Any metadata embedded in HTML <meta> tags is frequently discarded by robots and spiders. As a consequence, the collection has become a huge jumble with no rhyme or reason. The Web isn't the intricately structured work of art that its namesake in nature is.
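The metadata being thrown away is not hard to read. As a sketch of what a more conscientious crawler could do, the snippet below pulls <meta> name/content pairs out of a page using only Python's standard library; the sample page and its tag values are invented for illustration:

```python
from html.parser import HTMLParser

# A sketch of how a crawler could keep the <meta> tags that many
# robots and spiders simply discard. The sample page is made up.
class MetaTagExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name, content = attrs.get("name"), attrs.get("content")
            if name and content:
                self.meta[name] = content

page = """<html><head>
<title>Metadata & the Internet</title>
<meta name="description" content="Notes on metadata and peer-to-peer search">
<meta name="keywords" content="metadata, p2p, search">
</head><body>...</body></html>"""

extractor = MetaTagExtractor()
extractor.feed(page)
print(extractor.meta)
```

Indexing these few key-value pairs instead of (or alongside) the raw page text is exactly the kind of cheap, structured signal the plaintext-search status quo leaves on the floor.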

Early peer-to-peer applications emerged from relatively narrow spheres (MP3 file-sharing, email, weblogs, groupware, and so on) with well-understood semantics and tacit metadata - we know it's an MP3 because it's on Napster. These groups have the chance to define and codify their semantics before heterogeneity and ubiquity muddy the waters, allowing for better organizing, extraction, and search functionality down the road. Nonetheless, even at this early stage, we're already repeating the same mistakes.




