The "Real Aerospace" Thread
Moderator: Outsider Moderators
- CrimsonFALKE
- Posts: 404
- Joined: Tue Nov 12, 2013 11:31 pm
Re: The "Real Aerospace" Thread
What do we all have to say about fighter planes being entirely drones?
Re: The "Real Aerospace" Thread
Likely to eventually be the case. The general read I am getting is that they are still antsy about losing control of them, hence a desire to keep interceptors manned and to mainly have the relatively slow, easily-shot-down stuff be unmanned.
Re: The "Real Aerospace" Thread
CrimsonFALKE wrote: ↑Sat Jan 08, 2022 3:05 pm
What do we all have to say about fighter planes being entirely drones?
It will be reliable once aerial combat becomes a solved game, in algorithmic terms.
So it will never become entirely reliable. As long as one man outsmarting another man is a viable option, people will keep flying fighter planes.
Re: The "Real Aerospace" Thread
I think in the near future, low-radar-observability will continue to be a thing, and so air to air engagements can still happen at close range. In dogfights, I think a good human pilot will have an edge over an autonomous fighter for some time to come.
Remotely piloted air to air combat is likely to suffer from input lag and the potential for signal jamming.
As far as I'm aware, both the Air Force and Navy are developing manned sixth-generation fighters. Though I think they're looking at the possibility of having these aircraft also able to operate autonomously. For example, a manned fighter might be leading a pair of autonomous "missile buses."
Re: The "Real Aerospace" Thread
The question is one of control, autonomy, and responsibility. Existing drones are (mostly) remotely-piloted aircraft. There's a human in the loop, it's just that they're not sitting in the aircraft. But what if the connection between the remote pilot and the aircraft is lost, such as by jamming or by destruction of satellites? When the pilot is inside the aircraft, they can still act. When they aren't, the aircraft either chooses autonomously what it should do, or it ceases to do anything useful (and will probably crash or be shot down easily). And if it chooses autonomously what it should do, then who is responsible for its actions? Let's say your drone commits a war crime. Who is to blame?
Re: The "Real Aerospace" Thread
I think a double approach is more likely.
Send a manned craft escorted by a drone craft.
In the event of an attack the drones will do maneuvers the manned craft cannot and possibly even save the manned craft outright.
However, if the manned craft is shot down, the drones would simply zoom back to base for input, since the manned craft would have been providing the 'input'.
A larger manned craft with multiple 'remote' human controllers manning the drone wingmen fighters nearby.
Re: The "Real Aerospace" Thread
Generally speaking, dogfighting is still considered kind of unlikely; not that it won't happen, but I don't believe it is at the forefront of planners' minds. Stealth is a thing, but you are still looking at potentially 40-50 nautical miles of detection range with newer AESA radars. That's actually about the ideal no-escape engagement envelope for something like AMRAAM. I believe that is expected to extend somewhat eventually, due to the relative ease of upgrading radars at this point versus upgrading stealth performance, which is much harder to modify because it is so integral to the structure.
Mind you, the air force has pretty actively expressed an interest in taking an 'evolutionary rather than revolutionary' approach to aircraft design, so there is a decent likelihood the F-35 airframe will continue to evolve over time (as it already has to a significant extent). There is a non-zero chance that, for any given time period, the latest F-35 airframe may actually present a reduced detection aspect against the sensors contemporary to it, compared to the current airframe's expected performance against current sensors.
Re: The "Real Aerospace" Thread
I've read that the development goes towards the drone swarm approach. With a manned command plane (5th or 6th gen multipurpose jet) at the core, the drone group can be outfitted and mix/matched for a specific task. For example, an aerial superiority mission would need some drones as forward sensor platforms, the majority outfitted with long-range missiles and then a couple with short-range ones for defense.
The drones themselves would have a high grade of automation and independence, but the manned plane in their midst will still direct them and issue orders. This way, jamming the drones should be harder to accomplish.
My fanfic: A sword that wields itself
Re: The "Real Aerospace" Thread
@Arioch: "In dogfights, I think a good human pilot will have an edge over an autonomous fighter for some time to come.
Remotely piloted air to air combat is likely to suffer from input lag and the potential for signal jamming."
We've already passed that milestone:
https://www.insidehook.com/daily_brief/ ... t-dogfight
Re: The "Real Aerospace" Thread
The problem with machine learning is that it is bound by the criteria you set for it; it may have very poor (or even dangerous) responses for situations that it was not being graded on.
True machine intelligence may be coming sooner than we think, but until then I still think a good human pilot will have the edge. Especially when human pilots have more practice against AI systems.
- Keklas Rekobah
- Posts: 491
- Joined: Fri Jul 23, 2021 7:54 pm
Re: The "Real Aerospace" Thread
A better option might be to have an A.I. learn from combat logs what the most effective strategy against a particular enemy might be, and have the "Meat Pilots" fly accordingly.
Sort of like having IBM's Watson analyze your next opponent's favorite offensive football plays to determine the best defenses against them (but in a three-dimensional cube 1 light-second across).
“Qua is the sine qua non of sine qua non qua sine qua non.” -- Attributed to many
Re: The "Real Aerospace" Thread
Yes and no.
The test focused purely on the dogfighting AI. IIRC it did not incorporate the very important aspect of sensor data interpretation and recognition -- instead it was plugged directly into the flight sim so it could be told "enemy aircraft at these coordinates with this velocity vector", updated frame by frame, while the human pilots had to look at their screens and get their brains to do all the work of interpreting what they saw.
Re: The "Real Aerospace" Thread
"The problem with machine learning is that it is bound by the criteria you set for it; it may have very poor (or even dangerous) responses for situations that it was not being graded on."
That's the problem with human learning too (more specifically, human memory).
"IIRC it did not incorporate the very important aspect of sensor data interpretation and recognition"
Seems to me that that particular advantage would carry over to the real world--there is no point, after all, in having one digital device read another digital device using optical means. Any AI is going to experience the world around it using the aircraft's sensors--it isn't a pilot in an aircraft, it is the aircraft.
Re: The "Real Aerospace" Thread
Demarquis wrote: ↑Mon Jan 10, 2022 9:59 pm
Seems to me that that particular advantage would carry over to the real world--there is no point, after all, in having one digital device read another digital device using optical means. Any AI is going to experience the world around it using the aircraft's sensors--it isn't a pilot in an aircraft, it is the aircraft.
You misunderstand. I am talking about sensor data interpretation, so of course it'd be plugged directly into the radar rather than try to interpret the radar's screen.
But this doesn't change the fact that it'd still be sensor data it has to interpret. Subject to sensors working correctly and their various limitations. Not the same thing as an omniscient knowledge directly granted with perfect accuracy and completeness by the universe itself; which is what happened in the simulation.
Re: The "Real Aerospace" Thread
But the humans must have had access to the exact same information, unless you are implying that the simulation generates information that is not displayed on the screen.
- TerrifyingKitten
- Posts: 13
- Joined: Mon Jan 04, 2021 8:34 am
- Location: about 3 feet behind you.
Re: The "Real Aerospace" Thread
Regardless of the implementation details, as long as we are using action/reaction as our ladder up, we have to deal with "the rocket equation", which describes the theoretical maximum:
https://www.nasa.gov/mission_pages/stat ... yanny.html
We need a new idea. Something like a way to manipulate gravity (change the shape of the gravity well) or one of the fundamental forces. Imagine if the fabric of spacetime was something material to which you could attach. Then you just stand on the night side of the planet and grab on, Earth would jump away from your feet at 60,000 mph. Poof! You're in space for whatever the energy cost of activating your attachment device is. You might need some springs and cushions though because we've not got inertial dampeners. Yet.
Don't do this from midnight to noon though, or you attach to space and Earth smashes up through your feet to your head (at differing angles depending on time of day) at that same speed.
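The "theoretical maximum" the rocket equation describes is easy to compute for yourself. A quick sketch (the specific impulse and masses below are made-up round numbers for illustration, not any real vehicle's figures):

```python
import math

def delta_v(isp_s: float, wet_mass: float, dry_mass: float) -> float:
    """Tsiolkovsky rocket equation: ideal velocity change in m/s.

    isp_s: specific impulse in seconds; masses in any consistent unit.
    """
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(wet_mass / dry_mass)

# Hypothetical stage: 550 t fueled, 120 t empty, Isp of 300 s.
print(f"{delta_v(300.0, 550_000, 120_000):.0f} m/s ideal delta-v")
```

The point of the equation: no matter how good the engine, delta-v only grows with the logarithm of the mass ratio, which is the "tyranny" the linked NASA article is about.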
Re: The "Real Aerospace" Thread
The humans have a screen that shows them what they'd see if they were in an aircraft. They need to interpret that display to turn it into information.
Again, what I am attempting to convey is that in a real-world situation where you have an AI-controlled combat aircraft, the AI would have to understand the real-world situation through interpretation of its sensor data (radar, optronics, IRST, and whatever else). There's a big difference between knowing "enemy F-16 at position X, Y, Z with vector a, b, c" and looking at a radar return signal to interpret it as an enemy F-16 at that position.
Here is one example. Okay, it's student and amateur software, not the cream of the crop of what's available to a superpower's military-industrial complex, but here's a bot deathmatch in old-school Doom where the trick is that the bots have to play the game based on visual information -- they basically see the same thing a human player would see (actually they get access to the depth buffer as well).
They are terrible.
Re: The "Real Aerospace" Thread
TerrifyingKitten wrote: ↑Wed Jan 12, 2022 7:20 am
Regardless of the implementation details, as long as we are using action/reaction as our ladder up, we have to deal with "the rocket equation", which describes the theoretical maximum:
https://www.nasa.gov/mission_pages/stat ... yanny.html
Ha, they misspelled tyranny!
TerrifyingKitten wrote: ↑Wed Jan 12, 2022 7:20 am
We need a new idea. Something like a way to manipulate gravity (change the shape of the gravity well) or one of the fundamental forces. Imagine if the fabric of spacetime was something material to which you could attach. Then you just stand on the night side of the planet and grab on, Earth would jump away from your feet at 60,000 mph. Poof! You're in space for whatever the energy cost of activating your attachment device is. You might need some springs and cushions though because we've not got inertial dampeners. Yet.
Don't do this from midnight to noon though, or you attach to space and Earth smashes up through your feet to your head (at differing angles depending on time of day) at that same speed.
At that speed, you'd indeed go poof, but from the air friction. However, a similar idea was already introduced in H.G. Wells' "The First Men in the Moon". By using a gravity-negating material, you could detach yourself from this force, or direct its vector.
My fanfic: A sword that wields itself
Re: The "Real Aerospace" Thread
gaerzi wrote: ↑Wed Jan 12, 2022 10:25 am
The humans have a screen that shows them what they'd see if they were in an aircraft. They need to interpret that display to turn it into information.
Again, what I am attempting to convey is that in a real-world situation where you have an AI-controlled combat aircraft, the AI would have to understand the real-world situation through interpretation of its sensor data (radar, optronics, IRST, and whatever else). There's a big difference between knowing "enemy F-16 at position X, Y, Z with vector a, b, c" and looking at a radar return signal to interpret it as an enemy F-16 at that position.
No, actually that's pretty much what the computer has access to. Humans don't manually track or interpret radar returns, and haven't done anything particularly resembling that since the Vietnam era. The information is way too dense and is not, broadly speaking, human-readable in its raw form. The radar will actually do a lot to classify the aircraft based on relatively limited returns in ways that can't be replicated by a human, and do a lot to maintain a 'track' of a particular target that might otherwise be various disorganized blips moving around. By the time it gets to the dogfighting software it will actually just be 'enemy F-16 at position X, Y, Z with estimated velocity vector a, b, c, estimated acceleration vector d, e, f, estimated yaw/pitch/roll, and estimated angular velocity'.
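The 'track' described above can be sketched with a toy alpha-beta filter, the textbook ancestor of real tracking filters. Everything here (the class name, the gain values, the sample measurements) is invented for illustration; actual fire-control software uses far more sophisticated estimators (e.g. Kalman filters over full 3D state), but the principle of predict-then-correct is the same:

```python
# Toy 1D alpha-beta tracker: maintains a smoothed position/velocity estimate
# from noisy "radar return" positions. Textbook illustration only, not any
# real radar or fire-control software.

class AlphaBetaTrack:
    def __init__(self, x0: float, v0: float, alpha: float = 0.85, beta: float = 0.3):
        self.x = x0      # estimated position (m)
        self.v = v0      # estimated velocity (m/s)
        self.alpha = alpha  # position correction gain
        self.beta = beta    # velocity correction gain

    def update(self, measured_x: float, dt: float) -> tuple[float, float]:
        # Predict the track forward, then correct by a fraction of the residual.
        predicted = self.x + self.v * dt
        residual = measured_x - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / dt) * residual
        return self.x, self.v

track = AlphaBetaTrack(x0=0.0, v0=200.0)
# Target truly moving at ~250 m/s; one noisy-ish return per second.
# The velocity estimate climbs from the initial 200 m/s guess toward ~250 m/s.
for z in [255.0, 498.0, 752.0, 1001.0]:
    est_x, est_v = track.update(z, dt=1.0)
print(f"velocity estimate after four returns: {est_v:.0f} m/s")
```

Feeding disorganized blips through something like this is how "various disorganized blips" become a coherent track with position, velocity, and acceleration estimates the downstream software can consume.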
Re: The "Real Aerospace" Thread
@TerrifyingKitten: "We need a new idea. Something like a way to manipulate gravity (change the shape of the gravity well) or one of the fundamental forces. Imagine if the fabric of spacetime was something material to which you could attach..."
What you are describing sounds a lot like the so-called "Alcubierre Drive" https://en.wikipedia.org/wiki/Alcubierre_drive , which is hard science.
@QuakeIV: "No, actually thats pretty much what the computer has access to."
Yeah, I was going to say that gaerzi seems to be postulating a simulation package that generates data that isn't displayed on the screen. I can't think of any reason why someone would program software that did that. That's what happens in real life, but in a training simulation? Why? The simulation software is generating the screen display, and that should be all it's generating. Anything else is a waste of processing power.
That Doom playing bot appears to be an example of what I am describing: the bots play the game using the visual information from the screen (which is displaying the game Doom). They would pretty much have to do this, because, while I am not a programmer and do not have access to the developer notes for the game, there isn't any reason for the Doom software to be creating any data that isn't translated into the screen display. Why would it?
Actually, this should translate into an advantage for the AI in the real world, because there it's the human pilots who depend on the heavily processed information on their head-up displays, while an AI could be reading the raw sensor data in real time. Of course, it could be that the AI essentially incorporates the head-up display processing functions into itself, but I don't know how it all works at that level of detail.