There is a genre of Romance called simply Futuristic. Formerly, setting a story in "the" future was, for the most part, enough to qualify it as science fiction. Today, the futuristic romance is a growing genre of its own.
We've been discussing genre at length, and have noted how, over generations, genres of all types have had very fluid definitions.
My take on "genre" definitions is that the genre identification is more about what is left out than about what is included.
Readers look for another novel to give them the same "feeling" that a previous one did -- and often look for the same setting or time period as a signal that this novel is like that one.
"The" future is another such setting clue often used by readers to choose which book to spend money on. If you are "in the future" then you are not in Regency England or Imperial Rome. Thing is, with futuristic worldbuilding, a writer can indeed include time travel visits to ancient times, and modern interstellar versions of government by aristocracy. So "futuristic" may be as difficult to identify as all science fiction has been.
In science fiction, we worldbuild "a" future for a story by extrapolating trends, either straight line "if this goes on" or in a curve "if only" or "what if?"
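The two extrapolation styles above can be sketched in a few lines of code. This is only an illustration (the function names and numbers are invented for the example, not drawn from any real forecast): the same starting value diverges wildly depending on whether you extend the trend as a straight line or as a compounding curve.

```python
def straight_line(value_now, yearly_change, years):
    """'If this goes on': add the same fixed change every year."""
    return value_now + yearly_change * years

def curve(value_now, yearly_rate, years):
    """'What if?': the change compounds, growing on itself each year."""
    return value_now * (1 + yearly_rate) ** years

# Hypothetical trend: 100 units today, growing by 10 per year
# vs. by 10% per year, projected 20 years out.
print(straight_line(100, 10, 20))   # 300
print(curve(100, 0.10, 20))         # about 672.7
```

Twenty years out, the curve yields more than double the straight line -- which is why the "what if?" future so often feels stranger than the "if this goes on" one.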
But we can't expect to write about "the" future -- only "a" future. The future you choose to create either generates a story, or is generated by the story you want to tell.
Last week, we looked at trends in publishing, and how they swing back and forth regularly. Your story, the story you were born to tell, will fit into a trend somewhere -- your problem as a commercial writer is to identify the current trends and watch carefully, preparing a manuscript to present right when the trend that supports it begins to gather force.
Some trends are so big that we can't see them while sitting inside them.
The Internet and email were such a trend. Everything changed when the concept of the "browser" was deployed to realize the World Wide Web. Before that, universities were pouring thousands of hours into creating electronic records, books, facts, and images, accessible only by special and very idiosyncratic decoding software. They even gave such software women's names, as librarians were mainly female.
Then came the idea of standardizing all that coding and accessing it with a piece of software that could read "the" markup language we know today as HTML (HyperText Markup Language).
To look behind a web page, right-click on a blank spot of the page and choose "View Page Source." The page source now includes little program call-outs that tell your browser, or the server where the page resides, to run a small program that delivers "interactivity" -- so much of the page source you can access is just instructions to do things, and you can't see what those things are.
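A minimal sketch of what "View Page Source" might show (the file name and function here are invented for illustration, not from any real site): ordinary markup you can read, next to call-outs that merely point at code kept elsewhere.

```html
<!-- Visible content: markup the browser renders directly -->
<h1>Welcome</h1>
<p>This text appears on the page, readable in the source.</p>

<!-- A "call-out": fetch and run code you cannot read from here -->
<script src="/scripts/interactive-widget.js"></script>
<script>
  loadComments(); // what this function actually does lives elsewhere, out of view
</script>
```

The `<h1>` and `<p>` lines are the page; the `<script>` lines are only instructions to go do something -- exactly the invisible "interactivity" described above.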
Where will this be 20 years from now -- a hundred years?
We're already doing a lot of this by voice command. Artificial Intelligence is now considered the next big disruptor and it is ready to rock-n-roll big time.
This year, UPS is testing drones parked on top of its delivery vans to distribute packages in a neighborhood. The FAA thinks this is a fine idea. Those drones couldn't work without A.I. and other advanced tools that will soon bring you self-driving cars.
For maybe 80 years, we've had science fiction stories about A.I. characters that humans fall in love with. The idea is starting to seem less grotesque, less like something to resist.
But a lot of folks working on the bleeding edge of A.I. are sounding notes of alarm. A.I. can now be projected to take over most of the jobs workaday people make a living at. The only jobs left will require genius-level intelligence and creativity -- and even those are within reach of an Artificial Intelligence that can learn and keep learning.
Recently, there was an article about Artificial Intelligence learning to become aggressive, initiating attacks rather than just responding to them.
So far, nobody has identified something artificial intelligence can't do that humans can. Every time some human function is declared uniquely human, some human genius teaches A.I. to do it (or even do it better).
That is a trend!
We love A.I. -- we create artificial intelligence, then nurture and adore it as we do our children.
What is really going on here?
How will the human/A.I. interface develop? Will artificial intelligence become a legal person? (Heinlein explored that at length, and Star Trek's character Data gives us many new facets to consider.)
What about Artificial Intelligence Refugees washing ashore, fleeing some sort of cyberwar?
A.I. is being discussed as the solution to cyber-security, being able to sift vast Big Data pools and sort out the one or two major trouble spots (terrorists).
Right now, the entire security industry may be taking itself too seriously (Romantic Comedy is a fabulous genre for tackling this). The I.T. folks at work keep making you change passwords and berate you for opening emails or plugging in a thumb drive.
Mobile devices and services now require two-step authentication -- you have to have a smartphone to read your news feed on Yahoo. (Well, there is a work-around right now, but that won't last.)
The attitude behind the policies of cyber-security gurus is that if you get hacked, it is YOUR FAULT, not the attacker's -- in cyber-warfare, only the victim is to blame. You did something wrong. You breached protocol. You opened an email. You visited a website. You put in your personal data (but of course I.T. forces you to identify yourself!).
We are all tangled up in a ball of twine and quite ludicrous about it because we have (in a cultural panic) set aside several time-tested principles of life.
We have done this because the benefits of online communications are bigger than the threats and costs (so far).
Since we can't stop people in other countries from attacking us for profit, our "security" folks attack US. They blame the victim of the sucker-punch rather than the immorality of the sucker-puncher, and our own defense (our immune system!) attacks us, forcing us to change our way of doing things because of something someone we don't know did to us.
"Security" works differently if your Identity is known to the Security Officer.
Ask yourself: When was the last time Donald Trump was strip searched for the egregious crime of attempting to enter the White House?
Does Presidential Security torture, torment and beat up on the President?
Then why do the cyber-security I.T. department folks beat up on YOU when you try to access your Cloud account with this or that company? Stop what you're doing (you can't enter the White House)! Identify yourself (as if they don't already know what they are responsible for knowing)! Papers, please! ACCESS DENIED! You have to wait three days to try again.
Where did this come from? What is really going on here? What trends produced this deplorable state of business?
The principles we have abandoned are "don't blame the victim" and "innocent until proven guilty" and "I am who I say I am; if I lie, I will be removed from society, maybe forever."
You shouldn't blame a victim because next thing you know, you will be a victim.
Quality of life is severely infringed, productivity is sliced in half, and happiness is beyond reach if you live an entirely DEFENSIVE life in a defensive (curled-inward) posture. The H.E.A. ending as we currently envision it cannot happen inside a "secure" defense perimeter that punishes you for the deeds of those outside it who are guilty of life-destroying behavior.
Logic and reality long ago established that you cannot prove innocence, but you can prove guilt. So we must presume people innocent until we can prove them guilty.
Identity is sovereignty -- personal sovereignty is the bedrock of Western Civilization. This dates back to the Magna Carta, probably farther. There's a Biblical quote: "How goodly are your tents, O Jacob." It refers to the camping practice during the wandering in the desert, where tents were pitched so that the entryways did not face each other -- giving PRIVACY to the neighbors.
Privacy is the bedrock of personal sovereignty.
You can't DEFEND privacy, security, innocence, or Identity -- nor the net result of all these elements, FREEDOM.
Once you surround these elements with "defense" walls, they no longer exist! The very act of DEFENDING obliterates what is to be preserved.
So our entire cyber-security industry is set up backwards.
The ancient Chinese strategists knew this: the best defense is a good offense.
You don't punish your employer (the voters are the employer) for having been attacked by an outsider (a non-citizen).
The trend for Romance Futurologists to follow and extrapolate is, "How can we use A.I. to rectify our errors in cyber-security and every other sort of security, national and personal?" How can we use A.I. to reverse the entire I.T. industry's take on how to "secure" us, given that A.I. has now learned to be aggressive? (OK, we "shouldn't" -- but will we? And what if we did?)
What will we try first? What will we try last that actually works? (and who will fall in love with their A.I. protector? What fruit would such a union produce?)
Do we love to do the protecting -- or to be protected?
What's sexy about protection?
"Security" seems to be a word that refers to an absence of risk. Futurologists have to ask whether risk is, itself, sexy?
How much "security" do we need and when do we need it? At what price in productivity? If all human jobs will be un-invented by A.I. servants, do humans have to be "productive" any more?
Will life be one long orgy? Or will we all pick up and move to the stars, letting A.I. have Earth?
What price Freedom?