Saturday, 14 April 2018

Virtual Reality Gives Oregon Patients, Doctors Pain Treatment Alternative To Opioids

quote [ Oregon's largest health care providers see both promise and challenges in virtual reality as a non-opioid treatment for pain.  ]

Yay! I got my desktop VR working so now I'm all hyped up on imaginary experiences.

Look! Look!
Cats with detachable buttholes! :D

Cat Sorter VR Playthrough
[SFW] [Virtual & Augmented Reality] [+5 Good]
[by steele@12:19amGMT]

Comments

steele said @ 12:24am GMT on 14th Apr [Score:1 laz0r]
Mythtyn, regarding the voice recognition stuff we were talking about last time: I found a nifty little experience called Mad Hatter VR that puts you in the place of Alice, the Mad Hatter, or Mad Crouch or something. It's still early, but it's where I think we're going to see a lot of experiences heading. It's basically a very early form of the Ractor experiences in Stephenson's Diamond Age, in that it feeds you your lines in the easier modes, until the difficult mode where you have to play the part from memory.
steele said[1] @ 12:51pm GMT on 14th Apr
HoZay, another good one. [Discussion] I played VR Chat for a total of 19 hours yesterday and now I feel like I've taken DMT. I still stand by my position that this tech, in the right hands, is going to crack open our understanding of how consciousness and "will" work. It's kinda like dosing the worldwide punch with acid. The right experience can leave people rethinking their place in the universe. Of course, all that transcendence didn't stop Steve Jobs from being an utter twat, so who knows if this will be a positive thing. But I'm hopeful.
HoZay said[1] @ 5:23pm GMT on 14th Apr
If it's that disruptive, it will be banned. Acid isn't illegal for being fun, but because society won't work if a large number of people get enlightened/see through the facade of reality, etc.
steele said[1] @ 5:37pm GMT on 14th Apr
Blinders. "People have choices! You can't just put on a headset and change their brain!" ;)

Oh! And sure to be my personal favorite!

"I don't use VR so this doesn't affect me." :D
HoZay said @ 5:56pm GMT on 14th Apr
"In the right hands" seems like the hardest part of the spec.
steele said @ 6:25pm GMT on 14th Apr
LOL! No argument there whatsoever. For every one person that wants to help there are probably a hundred thousand looking to make a buck and inadvertently prop up the system. But! The bright side is that it's not something that can go away. Short of extinction. ;) Our own salvation, for lack of a better word, is built into our brains. We crave that story of a self-sacrificing hero. Even if we can't agree on the form it takes, its altruistic archetype exists within us as a direct byproduct of consciousness. Each person that's born is another roll of the dice that they'll find a character capable of bringing everyone together. And the very communication network the "powers that be" (however you want to define them) create to tighten their hold on all of us also gives those hopeful individuals the ability to reach everyone, even if there are a number of hoops required to pull it off. Really, it'd be like the greatest hack of all time: a benevolent mental botnet.

Queen - One Vision (Lyrics)

I believe one day the avatar will bring Fried Chicken to us all. :D
Taxman said @ 10:15pm GMT on 15th Apr
What is the correct response to “I don’t use VR so this doesn’t affect me”?

Won’t there be anti-VR people just like there are vegans, anti-vaxxers, Amish, etc.?
steele said @ 11:40pm GMT on 15th Apr
Ideally, pointing out to people that an inordinate number of people around them are using X, and thus that things happening within that realm will affect their lives regardless of whether or not they directly partake in X. But we've seen how well that worked out when we tried to point out the issues with facebook. Fact of the matter is, we don't make our decisions or beliefs based on logic; we make them based on subconscious and often emotional criteria. So I'd say the best bet is to make people aware of the issues surrounding X as best one can without tripping their backfire effect, and to promote the positive repercussions of responsible regulation of X while ignoring the naysayers as much as possible.
Taxman said @ 12:15am GMT on 16th Apr
But there was nothing vegans, anti-vaxxers, or the Amish could do about facebook either (as the minorities, not that they were trying). Their combined populations screaming at the top of their lungs would have produced the same outcome we have now.

We’re doomed or we’re not, but the majority has already decided. One is either part of the majority, or in the minority and gets to, uh... watch?

Not disagreeing with you that the concern needs to get out there, but how does the chicken little story end?
steele said @ 1:03am GMT on 16th Apr
Well, there is something they can do: get educated about the problem and help spread the word. It's not like I'm talking about vegans and anti-vaxxers here. I'm not talking about fringe groups. I'm talking about your average Joe or Joline that just isn't experienced with X. The fact that we have conversations here on this site where a large response has been "I'm not on facebook so this doesn't affect me" is indicative of a much larger ignorance than just facebook. Something I have been very, very, very vocal about.

The thing is, there are no real decisions; it all comes down to those subconscious machinations, how people feel about a topic, the kind of information a person finds themselves immersed in, like the Overton window. That's why this whole Sinclair thing and the rest of the mediaopoly is so dangerous. And as far as facebook goes, their ability to shift the Overton window is just the tip of the iceberg.

The chicken little story ends well when the chickens come together against the exploitative farmers and start thinking of themselves as one global tribe. To quote Joseph Campbell, "If you want to change the world, you have to change the metaphor." IOW, we have to change the stories we tell ourselves about our purpose in this world and where we're heading as a species. That probably won't happen without organization, but I think we're starting to see that as economic progressives start making headway.

Or, it doesn't end well and we become the fried chicken.
Taxman said @ 1:33am GMT on 16th Apr
Can you give an example where humanity has been able to successfully do this without the aforementioned terribleness eventually happening, where we simply “learn our lesson”?
steele said @ 2:46am GMT on 16th Apr [Score:1 Informative]
Nope. :D

Matter of fact, I'm about 3/4 of the way through The Great Leveler, which seeks to demonstrate, and does a pretty good job of it, that the seemingly inevitable collapses of unequal societies fall into four typical categories: transformative revolution, natural disasters, mass mobilization warfare, and state collapse. The author argues that ultimately the only effective method of deferring these downfalls is state-imposed equalization of society.

But. All that being said, those positive actions that defer those downfalls come from people fighting for a better world despite the odds. Without their efforts before the downfall there are no lessons to be learned, no corrections applied.
steele said @ 12:29am GMT on 3rd May
I finally finished the book, btw. So freaking dry. Also, apparently violence is the inadvertent answer, because violent large-scale communism was essentially the most effective long-term human solution to inequality. His final words before the appendix were basically that those who seek equality should be careful what they wish for, simply because of the total lack of evidence for an effective non-violent solution. But since we're heading down that road anyway... #Shrugs

Lol, I always knew a peaceful solution was a longshot, but we definitely won't find it if we don't try.
Taxman said @ 2:16am GMT on 3rd May [Score:1 Funny]
I think violence is the easy solution to several problems after humanity has had a long day. Perfect communism would be perfect, but there’s no way to build that Tower of Babel without all of us turning on each other before its completion. I don’t believe in a completely non-violent future because the moment we are all non-violent, anyone CAPABLE of violence becomes king.

I’m putting my two bits on robots to do the violence that’s needed in the future. Not terminator-esque, but rather robots and AI will watch everything. They will catch you conspiring because you sleep and they don’t. All the investigative work that stops crime today will be done nigh-instantly, simply because of the limitations of a biological brain versus a computational one. A robot will have perfect evidence, zero prejudice, zero pride, and will ALWAYS meet net-equity requirements (which will inadvertently FAVOR the poor!). Arrests will happen at your weakest moment, reducing the violence needed. This is in addition to the robots’ lack of need to defend themselves like their human counterparts from the before time.

Crime will become inefficient because those that commit it will be caught instantly. Maybe people will find ways around it, but I think that’s putting too much faith in human adaptation.

We’re already on our way there.
steele said @ 2:59am GMT on 3rd May
Have I recommended Weapons of Math Destruction to you yet? It's already a bit outdated tech-wise because of how fast neural networks have taken off, but that makes it more relevant than ever. We're already at a point where AI is so entrenched in systemic discrimination that it's very unlikely things will work out with AI being any sort of saviour for us. I think if we're lucky we might see something resembling post-scarcity in a Neal Stephenson Diamond Age sort of way, where the poor have access to the bare necessities via limited, DRM'd replicator tech, but it's much more likely we'll end up with an Elysium type situation minus the hunky Matt Damon. See, you're under the impression that the judicial system is striving for that perfection, but from the outside it seems much more like they prefer the efficiency of just plain shooting people that don't look like them. Unless you imagine some scenario where your LEO cohorts get a blank slate, it seems likely that whatever LEO AI solution we get will be formed in their image. After all, where do you think we would get the data to train those AIs?

Like, I really cannot emphasize enough how accurate this comic is. When you want to replace something with AI, it's basically: record as much data as you can, shove it all into your framework, and then mix as needed. Repeat. Our LEO AIs will only ever be as perfect as the humans they're based on. That was one of the things I found so disappointing about the movie Chappie, even. Not the movie itself, even if it had issues. But you've got this robot AI taking the place of police, and not one person I talked to who watched the movie ever questioned why the robots still had to resort to violence. No one I knew questioned why the cost of robot maintenance was more important than human lives. They just accepted it. These are the kinds of hurdles we're facing... and yeah, we are currently unequipped for them. :D
Taxman said @ 1:59pm GMT on 3rd May
I’ll take a look. (Math destruction)

I think what’s being ignored with LE robots is that they have no privacy. You can go into their heads and see EXACTLY why they did a thing. That reasoning can be posted in the newspaper for discussion. Then it can be edited as society deems fit.

When you’re programming them, you’re not going to say “watch these guys/gals” and have them mimic them. Procedure is going to be written into them. Boring, boring, boring procedure. Go ahead, try to program racial disparity into code. Make a variable called WinkWinkNudgeNudge.MelaninContent and see how long it takes before the Robotics Inspector General finds out about it.

Take the recent Starbucks situation where two black men were arrested. Now have robot LE show up (Chappie-style bots):

Engineer: Robot officer R3-FR092, explain why you did not arrest those two black men when you responded to that 911 distress call.

Robot: No authorization for arrest. Officer arrived at scene. Situation ascertained. No threat detected. No crime detected. Citizen input received “We’re waiting for someone”. Establishment management located. Approaching. Initiate protocol de-escalation. Output analyzed, executing. “Greetings sir/miss, distress call received, officer R3-FR092 reporting. No criminal activity detected.” Initiate playback. “We’re waiting for someone.” is the reason suspects have presented for loitering. Establishment is currently open, no violations detected. How may I help you?”

Engineer: They asked for them to be removed. Why didn’t you remove them?

Robot: Several loiterers detected. Individuals being isolated for enforcement action. Possible racial animus detected. Possible crime logged for regulatory review. No crime detected. Interrupt process. Subject identified as “friend” has joined suspect party. Commerce initiated. Officer R3-FR092 checking with management to offer any further assistance. Closing case file 19384635, leaving premises. Officer note: “Huge success.”

Now obviously this is an exaggeration, but I think it’s within the realm of reason.
steele said @ 2:07pm GMT on 3rd May
I, uh, don't think you understand how modern AI works.

You can't see shit, that's actually what the book is about. :D

Neural nets are essentially black boxes. You can see the logs of what was done, but you can't really explain why it was done without tearing apart the training material, and even then you'd be hard-pressed for a legit answer that other people could understand.
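To make that concrete, here's a toy sketch (scikit-learn on invented data; nothing to do with any real LEO system) of what "going into their heads" actually gets you. You can log every decision, but the "why" behind each one is just stacks of weight matrices, not a readable procedure.

# Toy illustration only: made-up data, tiny network, no real-world meaning.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # 500 fake "incidents", 4 features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # arbitrary rule the net has to learn

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict(X[:5]))                        # you can log every decision it makes...
for layer, w in enumerate(net.coefs_):
    print(f"layer {layer} weights:\n{w}")        # ...but the "why" is just these numbers

The logs tell you what it did; those weight matrices are the entire "explanation," and nobody can read intent, or the absence of it, straight out of them.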
Taxman said @ 3:10pm GMT on 3rd May
I will admit I’m lacking in knowledge of neural networks and AI generally.

I was riffing off your Elysium and Chappie situations. (Predicted robotic outcomes)

Officers are run through scenarios over and over again. An AI would be run through the same scenarios before release, to make sure it produces the expected output. What shouldn’t happen is racial bias, fear for one’s life, emotional prejudice, etc. creeping in. Officers are not expendable. Robots are. This can be programmed.

We may arm robots so that they can stop potential threats (to protect others), but there would never be a reason to have them protect themselves over human life with deadly force.
steele said @ 5:12pm GMT on 3rd May
Can I come live in your timeline? Because the gilded age I'm witnessing has a bit of a trigger-happy officer problem.

Officers are not expendable.

I actually think they should be. It would solve a lot of problems. It's strange how we scream, "Thank you for your sacrifice!" to the people we send off to die and/or murder foreign people, but when it comes to the people we think are supposed to protect us and enforce our society's ideals... we're kinda... "Meh, dude has to protect himself from that ten-year-old boy playing in the park with a water gun." This might be just me, but I've noticed we haven't quite mastered the ability to enforce the constitutional right to a fair and speedy trial for dead people. It seems like there's a loophole of some sort there that needs closing.
Taxman said @ 1:09am GMT on 4th May
Officers are not expendable, in context, read against robots. Similar to how Black Lives Matter does NOT insinuate that white or other lives don’t matter, in context.

I actually think they should be.

What a terrible thing to say. Everyone. EVERYONE is allowed to make mistakes. I haven’t heard you say a nice thing about the 99.9% of good things officers do that you appreciate. I can point out the impossible job locals are given. Be ON all the time. Never break. If you break, all will be judged.

I understand it’s an important duty: stopping you from killing each other, robbing each other, cheating each other. We will fail. We apologize. Accept our failings, as we accept yours.

We are not gods, though authority might make us seem so. We are you. Clad in armor that does not save all. Wielding weapons that dragons wield greater. We try as heroes might, we die as fools do. And the only ones to remember us are people who said “maybe we should be expendable”.
steele said[1] @ 1:26am GMT on 4th May
You didn't read that with the intention I wrote it, and I do apologize if I was unclear. I don't mean officers should be expendable in the sense that I don't care about them as people. I'm saying I don't think they can do their perceived job as protectors of society if they think their own life should come first. I believe, in an ideal society, you put on that badge and your life is forfeit to the ideals of that society. That should be the job. Innocent until proven guilty doesn't work if you can get gunned down for being in the wrong place at the wrong time. Hence the comparison to soldiers and the way "we" treat them for doing the opposite of what we expect from police.

I don't expect you to agree with it, I just wanted it to be clear.
Taxman said @ 1:54am GMT on 4th May
This is my argument for robots that are unfeeling, uncaring, and, when they fail, simply troubleshooted (troubleshot?).

Again, everyone in every other line of work is allowed to fail from time to time. Doctors fail, people die. Firefighters fail, people die. Soldiers fail, people die. Politicians fail, people die.

Officers die failing. Maybe they ‘had it coming’ for failing to uphold “innocent until proven guilty”. I don’t know, I didn’t read the article, but fuck the police!

Seriously, I kid. We should troubleshoot this problem, as we do all problems. I’m not happy, just like you aren’t happy. I’m suggesting we replace an unsolvable problem with one that CAN be programmed. In the meantime, accept the 7% efficient democracy you deserve. It’s going to fail. Let’s not make the people willing to use violence on your behalf think you don’t give a shit when they die. :-)
steele said @ 2:48am GMT on 4th May
See, I think we may disagree too much on the benevolence of authority figures.

I don't think they're willing to use violence on MY behalf. I think they're using violence to uphold the status quo. That's why throughout history you'll often find law enforcement on the opposite side of moral causes. As I said, protection is their perceived job, but their actual job is the protection of state authority, not morals. You know, tax collection, shit like that. ;)

But I digress. ;) Really, you've gotta read that book, and you may also want to check out some of Siraj Raval's videos on machine learning. His 'Learn X in Y Minutes' videos give a decent high-level overview of what neural nets are capable of, where you can kind of ignore the code and just watch what he accomplishes. But ultimately it all comes down to that comic: input at one end, output at the other, mix until you get something that works. And as we're quickly finding out, our data is all tainted by humanity. It's racist, it's sexist, it's straight-up discriminatory... because we have a tendency to be dicks to each other. :P
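If it helps, here's a minimal sketch of what "the data is tainted" means in practice; all the numbers are invented, and scikit-learn's LogisticRegression is just standing in for whatever fancier model you like. Nobody writes a WinkWinkNudgeNudge.MelaninContent variable; the protected attribute never even appears as a feature, but a correlated proxy (neighborhood here) smuggles the historical bias right back in.

# Toy sketch: biased historical labels + a proxy feature = a biased model,
# even though the protected attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                        # hidden protected attribute
neighborhood = (group + (rng.random(n) < 0.2)) % 2   # proxy correlated with group
behavior = rng.random(n)                             # actual conduct, identical across groups

# Historical labels: group 1 was arrested more often for the same behavior.
arrested = (behavior + 0.3 * group + rng.normal(0, 0.1, n)) > 0.8

X = np.column_stack([neighborhood, behavior])        # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, arrested)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted arrest rate for group {g}: {rate:.2f}")
# Same behavior distribution, higher predicted rate for group 1:
# the old bias rides back in through the proxy.

Train on our history and you reproduce our history. That's the whole "input at one end, output at the other" problem.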
Taxman said @ 12:46pm GMT on 4th May [Score:1 Good]
I try to take people at their word, intent, etc. until they show otherwise. Officers are as benevolent as any other profession. Hospital doctors can end up working ridiculous hours on impossible problems, all for a paycheck and, I like to think, for the benefit of other people. There’s a huge potential for abuse, but even if you found 1, 2, or 100 bad doctors, you wouldn’t paint the entire profession as abusive and acting in bad faith.

I think we’re all trying to help the state until we’re not. To say officers have always been on the wrong side of moral causes is like saying black people always vote Democratic because of their race. It takes away officers’ agency throughout history. There were cops who refused to assist or outright quit during the civil rights movement. You can’t note an absence. The fact that the state was able to find enough officers still willing to beat on African Americans? Well, like you said, humanity has a tendency to be dicks to each other, and our cup overfloweth. :P

Still shouldn’t paint the profession as a whole. It’s made up of people doing their best with the morals they have.

I will try to read that book (math destruction). I don’t have as much time to read a physical book, but I sometimes have activities that let me have an audiobook playing in the background or in my earpiece.
