Web 3.0

After Web 2.0, it was thought that the next generation of Web technology would be the Semantic Web.  The intent of the Semantic Web, as originally envisioned by Tim Berners-Lee, was to provide meaning, or semantics, to Web content, enabling programs on the Web to “read” and interact with this content in the same way as human users (Target).  Data exchange could then be performed between programs without explicit programming or human intervention.  The Semantic Web would therefore represent a “Global Brain” in which content would be processed by programs just as it is by human users, and the interconnected data that constitute the Semantic Web would be understood both contextually and conceptually (Silver).


Semantics would be imparted to the Web through innovations in artificial intelligence (AI).  AI has many different popular connotations, but in this case, it refers to any system, or intelligent agent, that receives input from its environment and executes actions to perform a task, or achieve a goal, based upon that input.  Such intelligent agents range from human beings to thermostats to biomes.  However, in the present context, the term refers to computational systems that implement AI by sensing or perceiving their environment and acting upon it.
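
As a concrete, deliberately simplified illustration of this sense-and-act loop, the short Python sketch below implements the thermostat example mentioned above; the goal temperature and the sensor readings are invented purely for illustration.

    # A toy agent: it perceives a temperature reading and chooses an action
    # that moves the room toward a goal temperature.
    TARGET = 20.0  # goal temperature in degrees Celsius (illustrative value)

    def thermostat_agent(sensed_temperature):
        """Return an action based on what the agent perceives."""
        if sensed_temperature < TARGET - 0.5:
            return "turn heating on"
        if sensed_temperature > TARGET + 0.5:
            return "turn heating off"
        return "do nothing"

    # Simulated sensor readings from the agent's environment.
    for reading in (17.2, 19.8, 21.3):
        print(reading, "->", thermostat_agent(reading))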


American computer scientist and cognitive scientist John McCarthy (1927 – 2011), an early researcher in the field who coined the term “artificial intelligence” in 1956, defined it as follows (McCarthy, What is AI?, 2004):


Q. What is artificial intelligence?

A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.


Q. Yes, but what is intelligence?

A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.


In the context of the Semantic Web, specialized programs – the intelligent agents – would automate the same types of tasks on the Web that are performed by human users.  The mechanism by which this would be done is XML (Extensible Markup Language) embedded in web pages.  The XML would not affect the appearance of the pages but would encode information that could be read by other programs on the Web.  This information, or metadata, would allow specialized web browsers to respond to queries by drawing on a large number of web pages (Target).
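
To make the idea concrete, the Python sketch below reads a fragment of XML metadata the way an automated agent might.  The Dublin Core element names (dc:title, dc:creator) belong to a real, widely used metadata vocabulary, but the sample document and its values are invented for this example.

    # Parse a small, invented XML metadata fragment and extract fields that a
    # program (rather than a human reader) could act upon.
    import xml.etree.ElementTree as ET

    sample = """<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>On the Semantic Web</dc:title>
      <dc:creator>A. Author</dc:creator>
    </metadata>"""

    root = ET.fromstring(sample)
    ns = {"dc": "http://purl.org/dc/elements/1.1/"}
    for field in ("dc:title", "dc:creator"):
        print(field, "=", root.find(field, ns).text)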


There was a large amount of research and development activity on the Semantic Web between 2001 and 2005.  During this time, the World Wide Web Consortium (W3C), a large, international forum for the proposal and development of Web standards, worked on many standards intended to bring the Semantic Web to realization.  Arguably the most important of these was RDF, or Resource Description Framework, mentioned above in the context of RSS.  RDF was to provide a grammar for the Semantic Web capable of representing arbitrary forms of knowledge (Target).  RDF was envisioned as the AI component of the Semantic Web.  The grammar essentially consists of (subject, predicate, object) triples.  RDF could also be expressed within JSON and XML.
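
The triple-based grammar can be illustrated in a few lines of Python.  The "ex:" names below are invented placeholders rather than terms from a real vocabulary, and a genuine RDF store would offer far richer query facilities; this is only a sketch of the (subject, predicate, object) idea.

    # Knowledge expressed as (subject, predicate, object) triples.
    triples = [
        ("ex:TimBernersLee", "ex:invented", "ex:WorldWideWeb"),
        ("ex:WorldWideWeb", "ex:hasExtension", "ex:SemanticWeb"),
        ("ex:SemanticWeb", "ex:usesGrammar", "ex:RDF"),
    ]

    # Even this naive list supports a simple "query": list every statement
    # whose predicate is ex:invented.
    for subject, predicate, obj in triples:
        if predicate == "ex:invented":
            print(subject, "->", obj)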


The creation of RDF data for prototyping the Semantic Web, along with several experimental systems, characterized the second phase of development, starting after 2005.  In the third phase, beginning in the late 2000s and early 2010s, attempts were made to adopt the standards developed by the W3C and to incorporate them into the practices of web developers.  An important innovation of the third phase is JSON-LD, which combines features of JSON (highly popular with developers) and RDFa, or RDF in Attributes, a specification for embedding RDF into HTML so that web browsers and search engines can process web page semantics.  Recall that JSON, or JavaScript Object Notation, is an open-standard data interchange format that is used extensively by developers.  JSON is human-readable and intuitive.  It is also machine-readable, as it is readily parsed and generated by programs (Introducing JSON).  JSON represents data objects as collections of attribute-value pairs.  Like the JavaScript from which it is derived, JSON is lightweight.  Although related to JavaScript, JSON is language-independent, and many programming languages can parse and generate it.  These features, along with its relative simplicity and broad functionality, have made JSON very popular with software developers, and it is used extensively to enable communication between web applications.
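
The following hedged sketch shows the kind of JSON-LD block a page might embed so that a search engine can interpret its meaning.  The @context and @type keywords are part of JSON-LD itself, the property names follow the public schema.org vocabulary, and the values simply describe this book as an example.  Because JSON-LD is ordinary JSON, any JSON parser can read it, as the Python fragment demonstrates.

    import json

    jsonld_text = """{
      "@context": "https://schema.org",
      "@type": "Book",
      "name": "Contemporary Digital Humanities",
      "author": {"@type": "Person", "name": "Mark P. Wachowiak"}
    }"""

    # Parse the JSON-LD block exactly as any other JSON document.
    data = json.loads(jsonld_text)
    print(data["@type"], "-", data["name"], "by", data["author"]["name"])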


Although a large amount of effort was expended, the Semantic Web was never realized, at least not in the form in which it was originally envisioned.  One difficulty is that the metadata supplied by users may be unreliable, or may not be supplied at all.  To be reliable, there would need to be a universally accepted representation for every object, idea, or concept to be encoded in metadata, which some observers consider unachievable, or even undesirable (Target).  Another problem is the data representation itself.  The W3C proposed XML as the mechanism by which the Semantic Web would carry metadata.  However, XML was resisted in some technical quarters, as JSON was already widely adopted and the XML-based standards were not seen to offer substantial improvements over it.  There was also the inherent difficulty of deploying software-controlled “intelligent agents”.  Although RDF was intended to supply the AI component of the Semantic Web, RDF itself proved nearly impossible to implement fully because of the difficulty of distinguishing similar words and concepts by context (Silver).

As a consequence of these problems, Semantic Web advocates began to see a general-purpose Semantic Web as unworkable and proposed that efforts be redirected to specific domains, such as medicine and other scientific fields (Target).


However, the technologies and standards resulting from the development of the Semantic Web have benefited other areas.  For example, Google uses JSON-LD to generate the conceptual summaries presented with search results.  The Open Graph protocol from Facebook allows developers to specify how their web pages will appear when shared on social media.  Additionally, because of Semantic Web technologies, developers can now build applications that draw on data from many different web sources (Target).


Although the Semantic Web was never realized and is no longer an active area of research and development, one of the main philosophies of Web 3.0 is a renewed decentralization, in opposition to the centralization that characterizes Web 2.0 in the form of data exploitation and targeted advertising.  The goal of Web 3.0 is instead a more human-centric online ecosystem.  It is also thought that Web 3.0 may lead to the creation of new business models (Web 3.0: The Next Evolution of the Internet).


The proposed Web 3.0 still relies on principles from the Semantic Web, which forms part of its underlying model.  Complex associations between web services, users, and data enable a form of connectivity that does not depend directly on keywords and numerical values.  Content on the Internet will therefore become machine-readable, increasing the overall effectiveness of the Web, which was one of the original goals of the Semantic Web (Web 3.0: The Next Evolution of the Internet).  Web 3.0, like the Semantic Web, will also make extensive use of AI to provide a more intuitive user experience, in contrast to the current Web 2.0, which relies on direct user input.  AI is also considered useful for distinguishing reliable from fraudulent information.  Related to advances in AI technology, extended reality, including virtual reality (VR) and augmented reality (AR), will provide visual (and possibly auditory and tactile) immersive experiences.  Because physical objects can be rendered virtually, as computer simulations, and virtual objects can be rendered physically (e.g., through 3D printing), extended reality will enable new forms of interaction between users and services, products, or data (Web 3.0: The Next Evolution of the Internet).


Security concerns that are prevalent in Web 2.0 will also be addressed through reduced reliance on centralized server systems, which often create central points of failure and targets for security breaches.  Blockchain, a “digital ledger” technology, is proposed as a way to increase security and transparency, and therefore to support the goals of Web 3.0 (Silver).  In a blockchain, records, or blocks, are linked together cryptographically through hash values, resulting in a decentralized, distributed digital record of transactions spread across many computer systems.  Data will also become more ubiquitous through the evolution of interconnectedness and the Internet of Things, discussed above (Web 3.0: The Next Evolution of the Internet).
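
As a minimal sketch of the linking idea only (omitting networks, consensus mechanisms, and digital signatures entirely), the Python fragment below chains records together by storing, in each block, the hash of the block before it; the transaction strings are invented.  Because every hash depends on the previous one, altering any earlier block breaks the chain.

    import hashlib
    import json

    def block_hash(data, previous_hash):
        """Hash a block's contents together with the previous block's hash."""
        payload = json.dumps({"data": data, "previous_hash": previous_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def make_block(data, previous_hash):
        return {"data": data, "previous_hash": previous_hash,
                "hash": block_hash(data, previous_hash)}

    # Build a tiny chain: each new block refers to the hash of the last one.
    chain = [make_block("genesis record", "0" * 64)]
    for record in ("Alice pays Bob 5", "Bob pays Carol 2"):
        chain.append(make_block(record, chain[-1]["hash"]))

    # Verify the chain: recompute every hash and check every backward link.
    for previous, block in zip(chain, chain[1:]):
        assert block["previous_hash"] == previous["hash"]
        assert block["hash"] == block_hash(block["data"], block["previous_hash"])
    print("verified", len(chain), "linked blocks")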


Web 3.0 is not yet a reality.  The technological infrastructure needed to support its aims is not ready for widespread deployment at present.  Its advocates see vast benefits in incorporating intelligent agents into Web services to provide a more human-centric online ecosystem.  Some observers, however, urge a more cautious approach.  Dartmouth College professor Aden Evens’s ideas concerning the synergistic relationship between Web 2.0 and discrete, binary code were discussed above.  Evens concludes his discussion of Web 2.0 with a comment on Web 3.0: “Some optimists promise that Web 3.0 will once again recast the epistemological foundations of internet culture by placing the computer or the network in the position vacated by the creative individual. Intelligent machines will process content and not just syntax, reminding users of the value of substance and creativity. Whether this is an enchanting dream or a deepening nightmare will depend on your level of cynicism and your relationship to technology” (Evens, 2012).

[Work Cited]
