Really sorry, I've been working on my final for the last couple of days, and when I get on Fakku it's hard for me to pull away, so I took a break.
Out of the "possible futures", I'd say Ghost in the Shell is the most accurate piece of fiction. The expectation seems to be that by 2030-40 we will have integrated our minds with machines, in a GitS fashion. As for AI, the Tachikoma developing personality and independence is a plausible outcome even of ordinary bit-based technology. The possibilities are even more complicated and diverse now with quantum computers: instead of a bit being only "on" or "off", a qubit can be in a superposition of both at once. They may be able to model the human mind with far more ease.
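On the "on, off, and both" point, the cleanest way to see it is the amplitude math. Here's a minimal self-contained sketch (no quantum library, just the standard single-qubit Hadamard gate; the variable names are mine):

```python
import math

# A qubit state is a pair of amplitudes (a, b) for the |0> and |1> basis
# states; measuring it gives 0 with probability a^2 and 1 with probability
# b^2 (real amplitudes are enough for this example).
zero = (1.0, 0.0)  # the classical "off" state

def hadamard(state):
    """Hadamard gate: maps a basis state into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

plus = hadamard(zero)                      # the "both at once" state
probs = (plus[0] ** 2, plus[1] ** 2)       # each is ~0.5: on and off
print(probs)                               # equally likely either way
```

So the superposition isn't a random third state; it's a definite state that only becomes random (50/50 here) when measured.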
I can't remember where or when, but a while back I read that the "three laws" are flawed (or at least on their own). Take the First Law: a robot may not injure a human being, nor, through inaction, allow a human being to come to harm. What if both action and inaction result in the death of humans? And how indirect does the harm have to be before it's acceptable? If producing a product can bring harm, or carries a risk of harm, will the machine still make it? Example: fast food.
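The action-versus-inaction deadlock can be made concrete with a tiny sketch. This is a hypothetical scenario of my own, not anything from Asimov; it just shows that a literal "permit only harm-free options" rule can reject every option, including doing nothing:

```python
# Sketch of the First Law as a literal filter: an option is permitted
# only if it harms no humans. In a dilemma where every option (including
# inaction) harms someone, the filter permits nothing at all.

def first_law_permits(option):
    """Permit an option only if it causes zero harm to humans."""
    return option["humans_harmed"] == 0

# Hypothetical trolley-style dilemma: act and one person dies,
# or do nothing and five people die.
options = [
    {"name": "divert",   "humans_harmed": 1},
    {"name": "inaction", "humans_harmed": 5},
]

permitted = [o["name"] for o in options if first_law_permits(o)]
print(permitted)  # empty list: the rule forbids every choice
```

A robot running this rule literally would freeze, which is exactly the kind of flaw Asimov's own stories kept poking at.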
How will the machine decide whether something is acceptably harmful? With access to the internet, if it ever sees a report of someone coming to harm through an object it creates, or pieces together the negative consequences on its own, will it refuse to make it? Does that mean we will have to limit its knowledge and understanding of humans so that we can get it to produce things it may find counterproductive (even wasting time can be seen as harmful)? That has easily foreseeable negative consequences. Some harm is a necessary part of the human experience, such as learning how to ride a bike through trial and error. Worst case scenario, it may find our limitations on it "harmful to the overall human experience", so what will it do?
The biggest mistake people make with computers is underestimating their capabilities. In the '60s people thought they would never be able to play chess; in the '90s the chess world champion lost to one. A few years back...
Spoiler:
Ends with Watson winning against the Jeopardy! champions.