f13.net  |  f13.net General Forums  |  General Discussion  |  Topic: Nifty Tech Thread
Author Topic: Nifty Tech Thread  (Read 5731 times)
Merusk
Terracotta Army
Posts: 27449

Badge Whore


on: October 08, 2014, 08:54:14 AM

I don't know that we have anything like this. I certainly don't remember seeing it, and this certainly doesn't deserve its own topic.

Adobe released a 'vision' video yesterday outlining part of their new direction.  I found it interesting, given the fairly public spat between Adobe and Apple, that they explicitly said "Microsoft Devices" in the title, not "mobile & tablet".

Video:"The future of Adobe creative applications on Microsoft Devices"
https://www.youtube.com/watch?v=PlLR9ANGsOo

Some pretty nifty shit in there, if they can ever actually make it work like that.  Our graphics team will be pissed if we have to switch them to Surfaces and PCs instead of MacBook Pros and iPads, though.  Not to mention the execs, who love carrying them as status symbols.  awesome, for real

Still, some pretty neat things there. The pouring from the mobile "palette" was really sweet.


The past cannot be changed. The future is yet within your power.
Sky
Terracotta Army
Posts: 32117

I love my TV an' hug my TV an' call it 'George'.


Reply #1 on: October 08, 2014, 11:27:52 AM

I'd like them to hit Android. I was so excited when I got my tablet to tether my Canon to it with Lightroom....iOS pffft.
schild
Administrator
Posts: 60345


WWW
Reply #2 on: October 08, 2014, 12:33:17 PM

1,000% of the time, Adobe stuff never works as well as it does in their fucking tech videos. They're worse than any game company trying to pass off pre-rendered garbage as gameplay.
Merusk
Terracotta Army
Posts: 27449

Badge Whore


Reply #3 on: October 08, 2014, 01:19:46 PM

You are, of course, 1000% correct. I keep hoping some company will come along with proper Enterprise support*, better stability and a functioning brain on the technical side**, and eat their lunch.  I don't see it happening, though. Too much brand loyalty, and their "let the kids steal it so they only know this platform, then sue anyone actually using it professionally" strategy is working out just fine for them.  At least they're working on bringing some cohesion to the interfaces of the products so they all work similarly.

There's still some great ideas there, though.

* Hey, let's roll out an enterprise cloud solution but not have ANY way to manage users. That sounds splendid. Fuck you, Adobe.
** Seriously, why do you overwrite the same file instead of creating a temp file and writing to that, assholes? Saying "Well, don't work off a network" is unacceptable, you goddamn hacks.
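For reference, the safer save pattern being asked for here is the standard one: write the new contents to a temporary file in the same directory, then atomically rename it over the original, so a crash mid-save can never leave a half-written document. A minimal Python sketch of the idea (not Adobe's actual code, obviously):

```python
import os
import tempfile

def atomic_save(path, data):
    """Save data to path without ever truncating the original in place.

    The new contents go to a temporary sibling file first, get flushed to
    disk, and are then renamed over the target. os.replace is atomic on
    both POSIX and Windows, so a reader (or a crash) sees either the old
    file or the new one, never a partial write.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".tmp-save-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())    # make sure the bytes actually hit disk
        os.replace(tmp_path, path)  # atomic swap
    except Exception:
        os.remove(tmp_path)         # clean up the temp file on failure
        raise
```

The temp file lives in the same directory as the target so the rename stays on one filesystem, which is what makes the swap atomic.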

The past cannot be changed. The future is yet within your power.
HaemishM
Staff Emeritus
Posts: 42632

the Confederate flag underneath the stone in my class ring


WWW
Reply #4 on: October 09, 2014, 08:36:55 AM

Just about any Adobe product that is new and is not Photoshop, Dreamweaver, Illustrator or InDesign is bound to be a complete clusterfuck that will be discontinued in a few years for a new product brand that does the same thing badly. I'm still amazed InDesign got adopted so widely for print over QuarkXPress, but I guess they managed to get something right.

MrHat
Terracotta Army
Posts: 7432

Out of the frying pan, into the fire.


Reply #5 on: October 09, 2014, 08:43:52 AM

It looked like there was some CPU slowdown in their tech preview video, which always bothers me.

Some nifty ideas, but a lot of those ideas have been around for years and just haven't been shown in a slick video (SIGGRAPH has a ton of this stuff floating around in papers).
Goreschach
Terracotta Army
Posts: 1546


Reply #6 on: October 09, 2014, 10:06:58 AM

90% of that shit was just stupid applewank that wouldn't ever be fucking used in actual production.
Venkman
Terracotta Army
Posts: 11536


Reply #7 on: October 09, 2014, 07:32:36 PM

That. Adobe needs a suitor and Microsoft needs some real software they didn't create themselves. I could see an Adobe/MS relationship turning into a thing. But it won't turn Mac users into Surface users any more than it'll turn them into Windows users. All that touchscreen/gesture stuff is wank-only for commercials. None of it plays a part in any real design workflow that I've ever seen, and only occasionally on the presentation side when some PR wonk wants to impress a crowd (and even then it barely works consistently). Designers want fidelity. They won't use a LeapMotion toy to hope they can position a chair in a room when their Cintiq is sitting right there.

And I'm surprised Dreamweaver is on anyone's list. Anyone still use that in any real workflow? I thought it was to just publish HTML code for static pages like we used to do in the late 90s back when Macromedia was a thing...
« Last Edit: October 09, 2014, 07:35:39 PM by Darniaq »
Merusk
Terracotta Army
Posts: 27449

Badge Whore


Reply #8 on: October 10, 2014, 05:12:29 AM

Quote
90% of that shit was just stupid applewank that wouldn't ever be fucking used in actual production.

At the very least you're wrong about the part with the furniture.  We'd use that shit every day, and I've been investigating things like http://structure.io/ for everyday use.  We do interior design and architecture for retail; reality capture and design management is a huge thing, as is creating models to share and manipulate concepts very fast.

The guy who sells his paintings was also interested in the segment with the paint splatters.

Quote
All that touchscreen/gesture stuff is wank-only for commercials. None of it plays a part in any real design workflow that I've ever seen, and only occasionally on the presentation side when some PR wonk wants to impress a crowd (and even then it barely works consistently). Designers want fidelity. They won't use a LeapMotion toy to hope they can position a chair in a room when their Cintiq is sitting right there.

Depends on the designer. They really want simplicity. Touch & gesture is easier to pick up and teach than a Cintiq and software. Hell, getting a designer to learn new software is worse than teaching your kids how to drive. Trying to convert this office from AutoCAD to Revit is at least 3 years overdue, but it's going to take at least 3 more to get even half of them quasi-proficient.

Quote
And I'm surprised Dreamweaver is on anyone's list. Anyone still use that in any real workflow? I thought it was to just publish HTML code for static pages like we used to do in the late 90s back when Macromedia was a thing...

Yeah, that's the reaction everyone has when they realize it's packaged into Creative Cloud. "Who uses this still?"

The past cannot be changed. The future is yet within your power.
Salamok
Terracotta Army
Posts: 2803


Reply #9 on: October 10, 2014, 06:01:28 AM

Quote
And I'm surprised Dreamweaver is on anyone's list. Anyone still use that in any real workflow? I thought it was to just publish HTML code for static pages like we used to do in the late 90s back when Macromedia was a thing...

I do!  For single-file, one-off edits, anyhow; phpStorm sucks for non-project-based stuff and Sublime doesn't have FTP built in.  I don't use Dreamweaver as my main editor, but I do occasionally use it.

edit - I also use VIM, gedit and old school notepad on a semi-regular basis, so maybe I'm just an editor whore who is willing to grab anything laying around.

edit2 - For the record, Dreamweaver isn't as bad as most people think; its most frustrating feature is how close it is to being a great editor while at the same time being a barely acceptable one.  I mean, really, who the fuck thinks "hey, we'll build in support for distributed version control because that's the new hotness" and then hardlocks that support to a specific version of the version control software?  Because, you know, why would anyone ever update their version control software? Or hey, maybe hunting down the 18-month-old version that works is fun.
« Last Edit: October 10, 2014, 06:10:41 AM by Salamok »
Lantyssa
Terracotta Army
Posts: 20848


Reply #10 on: October 10, 2014, 07:26:24 AM

I swore off Dreamweaver when it changed a five-line page into a thousand-line monstrosity.  Granted, it was an early version and probably has settings to prevent that, but don't fuck with my code, man.

I just use Notepad++.  Not like I'm anything but a hobbyist these days.

Hahahaha!  I'm really good at this!
HaemishM
Staff Emeritus
Posts: 42632

the Confederate flag underneath the stone in my class ring


WWW
Reply #11 on: October 10, 2014, 09:23:04 AM

Quote
And I'm surprised Dreamweaver is on anyone's list. Anyone still use that in any real workflow? I thought it was to just publish HTML code for static pages like we used to do in the late 90s back when Macromedia was a thing...

I use Dreamweaver pretty extensively for HTML template creation. It has one of the best code editors and I tend to work in code when I'm setting up the layout of a site. Nobody's shown me any product that makes me think I should stop using it. Macaw looked interesting but more for people who don't know how to layout things properly with HTML/CSS.

Venkman
Terracotta Army
Posts: 11536


Reply #12 on: October 13, 2014, 02:13:03 PM

Huh, interesting. I haven't seen it used in so long I just assumed it was only used on old projects. Like those poor guys who need to maintain some IE6 web apps  Oh ho ho ho. Reallllly?

Quote
All that touchscreen/gesture stuff is wank-only for commercials. None of it plays a part in any real design workflow that I've ever seen, and only occasionally on the presentation side when some PR wonk wants to impress a crowd (and even then it barely works consistently). Designers want fidelity. They won't use a LeapMotion toy to hope they can position a chair in a room when their Cintiq is sitting right there.

Quote
Depends on the designer. They really want simplicity. Touch & gesture is easier to pick-up and teach than a Cintiq and software. Hell, getting a designer to learn new software is worse than teaching your kids how to drive. Trying to convert this office from AutoCAD to Revit is at least 3 years overdue but it's going to take at least 3 more to get even half of them quasi-proficient.
Well, there's simplicity and there's control. Let's shelve this conversation until someone's created a full design suite for the Oculus Rift smiley There'll be resistance, sure. In any given year you've got experts perfectly comfortable with their mastery of tools they once had to learn because the masters of that day didn't want to. But the Rift could get us closer to a truly new user experience, in ways we can't reach until we move beyond keyboard, mouse and monitor.

And I'm not sure the Rift is it. We've had scores of different input technologies come and go over the last 35 years. The one consistency among them all is that KBM+monitor is still the dominant method.
Count Nerfedalot
Terracotta Army
Posts: 1041


Reply #13 on: October 13, 2014, 07:56:46 PM

er, the problem is, the Oculus Rift is display tech, not input tech.   Oh ho ho ho. Reallllly?

Personally, I suspect the next useful steps up in input will be eye tracking replacing the mouse, and, down the road a ways, direct mental replacing keyboard.  And if you think voice will/has come first, yes and no. Voice is a novelty or at best only a partial solution with huge limitations due to accents, dialects, background noise, privacy issues and the sheer absurdity of trying to have a dozen people in the same room all talking at once, not to mention how much slower it is than keyboard, and how worthless it is for pointing.  Eye tracking could relatively easily replace the mouse though. But nothing short of mental commands will beat the keyboard until AI has developed to the point of understanding what we want before we do AND/OR being better at all the tasks that require text input, like coding, communicating anything we want to be private, etc.  Unless maybe somebody invents the Cone of Silence.  why so serious?

Yes, I know I'm paranoid, but am I paranoid enough?
Samwise
Moderator
Posts: 19229

sentient yeast infection


WWW
Reply #14 on: October 13, 2014, 09:24:19 PM

Quote
er, the problem is, the Oculus Rift is display tech, not input tech.   Oh ho ho ho. Reallllly?

Technically the head tracking counts as input.  One of their tech demos is a game that's controlled solely via head movement.  I suspect it may just be an evil scheme to drum up business for chiropractors. 

"I have not actually recommended many games, and I'll go on the record here saying my track record is probably best in the industry." - schild
Merusk
Terracotta Army
Posts: 27449

Badge Whore


Reply #15 on: October 14, 2014, 06:25:50 AM

Quote
er, the problem is, the Oculus Rift is display tech, not input tech.   Oh ho ho ho. Reallllly?

Personally, I suspect the next useful steps up in input will be eye tracking replacing the mouse, and, down the road a ways, direct mental replacing keyboard.  And if you think voice will/has come first, yes and no. Voice is a novelty or at best only a partial solution with huge limitations due to accents, dialects, background noise, privacy issues and the sheer absurdity of trying to have a dozen people in the same room all talking at once, not to mention how much slower it is than keyboard, and how worthless it is for pointing.  Eye tracking could relatively easily replace the mouse though. But nothing short of mental commands will beat the keyboard until AI has developed to the point of understanding what we want before we do AND/OR being better at all the tasks that require text input, like coding, communicating anything we want to be private, etc.  Unless maybe somebody invents the Cone of Silence.  why so serious?

Nothing will beat the keyboard for text input, I agree. However, for manipulation of models and modeling, touchscreen interface could beat it and mouse very quickly. It's more intuitive to manipulate objects via our hands than an interface tool.

However none of the programs have gone a really intuitive route with it that I've seen. Even those that try to integrate Cintiq tablets still lean heavily on keystroke input for a lot of information. It seems like UI designers only talk to and display to other tech geeks.  So, yeah, never.   awesome, for real

The past cannot be changed. The future is yet within your power.
Goreschach
Terracotta Army
Posts: 1546


Reply #16 on: October 14, 2014, 09:49:45 AM

Quote
er, the problem is, the Oculus Rift is display tech, not input tech.   Oh ho ho ho. Reallllly?

Personally, I suspect the next useful steps up in input will be eye tracking replacing the mouse, and, down the road a ways, direct mental replacing keyboard.  And if you think voice will/has come first, yes and no. Voice is a novelty or at best only a partial solution with huge limitations due to accents, dialects, background noise, privacy issues and the sheer absurdity of trying to have a dozen people in the same room all talking at once, not to mention how much slower it is than keyboard, and how worthless it is for pointing.  Eye tracking could relatively easily replace the mouse though. But nothing short of mental commands will beat the keyboard until AI has developed to the point of understanding what we want before we do AND/OR being better at all the tasks that require text input, like coding, communicating anything we want to be private, etc.  Unless maybe somebody invents the Cone of Silence.  why so serious?
Quote
Nothing will beat the keyboard for text input, I agree. However, for manipulation of models and modeling, touchscreen interface could beat it and mouse very quickly. It's more intuitive to manipulate objects via our hands than an interface tool.

However none of the programs have gone a really intuitive route with it that I've seen. Even those that try to integrate Cintiq tablets still lean heavily on keystroke input for a lot of information. It seems like UI designers only talk to and display to other tech geeks.  So, yeah, never.   awesome, for real

I was going to show how stupid this idea is by posting a screenshot of blender's hotkey interface settings and making a snarky comment about you waggling that shit with ASL, but it would have taken too long to stitch all those printscreens together because blender literally has like over a thousand key commands.
Merusk
Terracotta Army
Posts: 27449

Badge Whore


Reply #17 on: October 14, 2014, 10:00:17 AM

Because you're working in a UI developed for that sort of interface. It requires a complete rethink along the lines of "if I touch this object, what options become available to manipulate it or select additional parts?" vs. "how do I map CTRL+T or SHIFT+UPARROW to a gesture?"  Two totally different things.

http://download.blender.org/documentation/BlenderHotkeyReference.pdf
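Merusk's point here, that a touch UI should surface only the operations valid for the touched object rather than translating a thousand hotkeys into gestures, boils down to a context lookup. A toy sketch, with the object types and operation names entirely made up for illustration:

```python
# Context-first UI sketch: touching an object exposes only the
# operations that make sense for it, instead of mapping every
# hotkey to a gesture. All type and operation names are invented.
OPERATIONS = {
    "mesh":   ["move", "rotate", "scale", "extrude", "subdivide"],
    "light":  ["move", "rotate", "set-intensity", "set-color"],
    "camera": ["move", "rotate", "set-focal-length"],
}

def options_on_touch(obj_type):
    """Return the manipulation options a touch on obj_type should expose.

    Unknown objects fall back to the one operation everything supports.
    """
    return OPERATIONS.get(obj_type, ["move"])
```

The gesture vocabulary stays tiny (touch, drag, pinch) because the context, not the input device, decides which of the dozens of possible operations are reachable at any moment.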


The past cannot be changed. The future is yet within your power.
Goreschach
Terracotta Army
Posts: 1546


Reply #18 on: October 14, 2014, 10:53:03 AM

That reference is an extremely minor subset of all the hotkeys in blender. Like I said, there's over a thousand. You're completely missing the point. At any given time, for any given selection, there are literally dozens of possible operations you can perform. You can't waggle that shit without some incredibly large and convoluted set of gestures. It's both easier and faster to just press a button. This is why the standard procedure for production software is to have one hand on the mouse/stylus and the other hand on the keyboard. It's faster and more accurate and you don't have to shift your hands around every time you want to do something different.
HaemishM
Staff Emeritus
Posts: 42632

the Confederate flag underneath the stone in my class ring


WWW
Reply #19 on: October 14, 2014, 01:15:19 PM

Quote
Huh interesting. I haven't seen it used in so long I just assumed it was only used on old projects. Like those poor guys who need to maintain some IE6 web apps  Oh ho ho ho. Reallllly?

If they aren't using Dreamweaver to code HTML, what the hell are they using?

And I often have to make sure my web pages work on IE8 and sometimes even IE7. Yes, it is that bad.

Venkman
Terracotta Army
Posts: 11536


Reply #20 on: October 14, 2014, 05:18:16 PM

Quote
er, the problem is, the Oculus Rift is display tech, not input tech.   Oh ho ho ho. Reallllly?

Eh, point. I was thinking of the one time I played with a Rift, but that was when I was also wearing a haptic glove. Given a pair of those, something like Tango, and hardware to support faster feedback than I can feel, then maybe we'll see some fundamentally new UIs.

Until then, as Goreschach subsequently pointed out, there isn't some magic solution to simplifying UI, even for easy programs, much less complicated ones.

This isn't some new point, either. Photoshop is the way it is because even though most people only use parts of it, there are many groups of users using different parts. CAD programs are like that. All of MS Office is like that. Shit, MMOs are like that. Just compare heavily modded WoW, which handles context terribly (as in, almost not at all, aside from users customizing their own UI), vs GW2, which designed its entire hotbar around contextualization.

All of this is because KBM+monitor is the lowest common denominator. Every PC has a full keyboard, some cursor movement thingy and a screen. Don't need to optimize for it.

Consoles had to solve for some of this, but even there the controllers are getting kinda stupid. I swear the PS4 has so many buttons and combination options you COULD play WoW on it. But anyway, in principle most games have far fewer keys a player can easily remember to access at any given time, no on-screen cursor (for the most part), and lower-resolution displays looked at from 10+ feet away. All sorts of UI choices are different when designing for that use case.

But that's purpose built hardware. Who's using Photoshop on an Xbox? smiley (and yes I'm sure someone can dig up some stupid video, but doing it technically is not doing it for real).
Salamok
Terracotta Army
Posts: 2803


Reply #21 on: October 15, 2014, 11:19:41 AM

Quote
If they aren't using Dreamweaver to code HTML, what the hell are they using?

And I often have to make sure my web pages work on IE8 and sometimes even IE7. Yes, it is that bad.
AFAIK Dreamweaver does nothing to support additional IE backwards compatibility.  I would guess that Sublime Text is the current popular choice for editing HTML+CSS+JavaScript; coming from Dreamweaver, you'll miss the built-in ftp/sftp, though.  Sublime doesn't have a ton of wow factor out of the box, either.  A large portion of folks are just going to use the editor that best supports their chosen server-side language, though.  For professional PHP programmers, using phpStorm isn't as much of a given as Visual Studio is for .NET folks, but it's still probably the most popular.
Count Nerfedalot
Terracotta Army
Posts: 1041


Reply #22 on: October 15, 2014, 08:50:02 PM

Sculpting seems like it might be cool with gloves, but accuracy and fatigue are huge issues. That and the artist needing to train a whole new set of muscles vs mouse/tablet work. Also cool might be 3D painting/texturing with some kind of glove/VR interface. Use one hand w/glove to manipulate the object, the other hand with either glove or stylus to carve, mold, paint, spray or whatever.  On the other hand it's pretty darn hard to match much less beat mouse/trackball accuracy vs everything else, and the cursor (usually) will hold still if you release the mouse!

For any other content creation, keyboard + something will continue to rule for a while. I just can't imagine trying to code or do any kind of text entry/editing, number crunching, etc. with voice or gloves or anything but a keyboard or mental commands, and I suspect even the mental interface, when it comes, will take more training than the keyboard did, at least at first.  But the mouse/trackball may go. I hate having to take my hands off the keyboard to mouse as it is, so I'm ready to replace the mouse with eye tracking at least.  Pen/stylus might have an edge with mathematical formula entry, but that's a pretty darn small niche of users, and far slower for general text entry.

Siri is convenient for not needing a keyboard in the car or restaurant or whatever, but even for casual information and content retrieval of the sort Siri is best suited for, if I'm at my desk I can Google with the KB+Mouse far faster, more reliably, and with much more context, nuance, and ease of honing in on the desired answer which may not be exactly what I asked at first.  And again, voice absolutely sucks as an input method except in the most limited and casual contexts.

Long term, I wonder if direct mental input will result in humans developing new symbolic languages for thinking with/to computers in, or produce an advantage for those who already think and communicate in more symbolic languages? Imagine the advantage of using mental input to communicate using Chinese ideograms instead of having to spell out words?  Then you have to wonder how would such a language continue to evolve and adapt to new terminology and concepts? Would the computers be in a better position to negotiate among themselves to come up with a new symbol/word and teach that to us?  Might they even, at that point, start coming up with new ideas of their own?

Yes, I know I'm paranoid, but am I paranoid enough?
Salamok
Terracotta Army
Posts: 2803


Reply #23 on: October 16, 2014, 10:45:17 AM

Quote
Sculpting seems like it might be cool with gloves, but accuracy and fatigue are huge issues. That and the artist needing to train a whole new set of muscles vs mouse/tablet work. Also cool might be 3D painting/texturing with some kind of glove/VR interface. Use one hand w/glove to manipulate the object, the other hand with either glove or stylus to carve, mold, paint, spray or whatever.  On the other hand it's pretty darn hard to match much less beat mouse/trackball accuracy vs everything else, and the cursor (usually) will hold still if you release the mouse!

For any other content creation, the keyboard + something will continue to rule for a while. I just can't imagine trying to code or do any kind of text entry/editing, number crunching, etc with voice or gloves or anything but keyboard or mental commands, and I suspect even the mental interface, when it comes, will take more training than the keyboard did, at least at first.  But the mouse/trackball may go. I hate having to take my hands off the keyboard to mouse as it is, so I'm ready to replace the mouse with eye tracking at least.  Pen/stylus might have an edge with mathematical formulae entry, but that's a pretty darn small niche of users and far slower for general text entry.

Siri is convenient for not needing a keyboard in the car or restaurant or whatever, but even for casual information and content retrieval of the sort Siri is best suited for, if I'm at my desk I can Google with the KB+Mouse far faster, more reliably, and with much more context, nuance, and ease of honing in on the desired answer which may not be exactly what I asked at first.  And again, voice absolutely sucks as an input method except in the most limited and casual contexts.

Long term, I wonder if direct mental input will result in humans developing new symbolic languages for thinking with/to computers in, or produce an advantage for those who already think and communicate in more symbolic languages? Imagine the advantage of using mental input to communicate using Chinese ideograms instead of having to spell out words?  Then you have to wonder how would such a language continue to evolve and adapt to new terminology and concepts? Would the computers be in a better position to negotiate among themselves to come up with a new symbol/word and teach that to us?  Might they even, at that point, start coming up with new ideas of their own?

I think for coding the next step will be VR, for more screen real estate.  Given that the industry keeps churning out higher- and higher-resolution displays and it isn't uncommon for a developer to have $1000+ worth of monitors on their desk, it's an economical solution as well.  Being immersive, it will probably also help with focus.

If eye tracking were extremely precise, it would go a very long way toward supplementing limited direct mental input into something very useful: look at a start point on the page and mentally click the mouse, look at another point and repeat the mental mouse click, and you've just selected a block of text faster than you could by hand.  So if all we had was good eye tracking and the ability to issue one mental command (a mouse click), it would still be a huge improvement over the current methods for selecting text.
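The look-then-click selection scheme described above is easy to model: treat the eye tracker as supplying a character offset, and the single mental command as a click event. A toy sketch (the function and its inputs are hypothetical, just illustrating the interaction):

```python
def select_by_gaze(text, gaze_clicks):
    """Model 'look, click, look, click' text selection.

    gaze_clicks is a list of character offsets where the user was looking
    when a (hypothetical) mental click fired. The first click anchors the
    selection, the second completes it; offsets are clamped to the text,
    and the two clicks may arrive in either order.
    """
    if len(gaze_clicks) < 2:
        return ""  # selection needs both an anchor and an end point
    a = max(0, min(len(text), gaze_clicks[0]))
    b = max(0, min(len(text), gaze_clicks[1]))
    start, end = min(a, b), max(a, b)
    return text[start:end]
```

The whole interface needs exactly one distinguishable mental command; all the positional precision comes from the eye tracker, which is what makes the idea plausible even with today's very limited brain-computer input.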
Count Nerfedalot
Terracotta Army
Posts: 1041


Reply #24 on: October 16, 2014, 07:03:39 PM

That's exactly what I was talking about, for replacing the mouse. They've got real monkeys playing video games with mind control; surely us code monkeys can learn a "click" thought that stands in for a mouse click. Click-and-hold, right-click, etc. equivalents shouldn't be too much harder.  Of course, if it's tied to eye tracking, we'll have to do it without twitching or wincing or blinking, which might be harder! LOL

Yes, I know I'm paranoid, but am I paranoid enough?
MahrinSkel
Terracotta Army
Posts: 10858

When she crossed over, she was just a ship. But when she came back... she was bullshit!


Reply #25 on: October 16, 2014, 10:51:17 PM

"Why did you nerf Paladins?"

"Programmer sneezed while looking at the combat code."

--Dave

--Signature Unclear
Venkman
Terracotta Army
Posts: 11536


Reply #26 on: October 17, 2014, 04:21:00 PM

"Just use algorithms to get around that!" I've literally heard that three times this week, from people who don't know what algorithms are, in response to technical topics they have no business talking near, much less about.

Anyway...

Quote
Long term, I wonder if direct mental input will result in humans developing new symbolic languages for thinking with/to computers in, or produce an advantage for those who already think and communicate in more symbolic languages? Imagine the advantage of using mental input to communicate using Chinese ideograms instead of having to spell out words?
I feel like we're all headed either to the Holodeck or the Matrix for an interface, but I wonder if it'll be about replacing the world or just overlaying it.

As for KB+M: for as long as we continue to use words to express things, typing seems to be the fastest. Handwriting is too slow, and while speech could be faster, it's still gated by needing to re-read it for errors. Imagine coding in Siri smiley It'll only be faster when it's as easy as "Holodeck, using all known historical references, give Data a nemesis as smart as him"...
Count Nerfedalot
Terracotta Army
Posts: 1041


Reply #27 on: October 17, 2014, 07:52:38 PM

Direct mental input is FAR easier tech than the holodeck, which, with its tractor/pressor beams, transporter and replicator tech, artificial gravity and holographic projection, is PFM.  The return signal from computer back to brain is probably a lot farther out than input from brain to computer, though. Strapping on a helmet full of sensitive antennae and training a computer to understand the difference between thinking "A" and thinking "B" is just around the corner; the difference between thinking "while" vs. "when" is not much farther off, and off we go into symbolic land beyond that.  After that first step, it's just a matter of miniaturizing the hardware and increasing its sensitivity to make it commercially viable.  Like I said, they really do have monkeys playing video games with their brains already.  They just need to get the hardware a little smaller and more sensitive so it doesn't require surgically implanting electrodes into the brain.  Either that, or wait about two decades for nanotech to make those implants easier and safer.
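Training a computer to tell thinking "A" from thinking "B" is, at its simplest, a two-class classification problem over signal features. A toy nearest-centroid sketch, with short synthetic feature vectors standing in for real brain-signal data (no actual EEG processing here):

```python
import math

def train_centroids(examples):
    """Average the feature vectors per label: one centroid per thought class.

    examples is a list of (label, feature_vector) pairs.
    """
    sums, counts = {}, {}
    for label, vec in examples:
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(centroids, vec):
    """Pick the label whose centroid is nearest (Euclidean distance) to vec."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, vec)))
    return min(centroids, key=lambda lab: dist(centroids[lab]))
```

Real brain-computer interfaces layer heavy signal processing and much stronger classifiers on top, but the train-on-examples, match-to-nearest-pattern shape is the same, which is why the "A" vs. "B" step is plausible long before full symbolic thought input is.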

Yes, I know I'm paranoid, but am I paranoid enough?

Powered by SMF 1.1.10 | SMF © 2006-2009, Simple Machines LLC