How can I write code for an optical touchscreen using infrared technology?

In summary, the original poster is weighing a capacitive touchscreen against an optical one that uses infrared LEDs and line-scan cameras. He has written code to handle data from a 3D model before, but is unsure whether his skills are sufficient to write efficient code, and he is considering a third camera to increase accuracy and allow multitouch.
  • #1
Sorade
Hello, I am looking into building a touch screen. I have been considering a capacitive touch screen, but I am also interested in an optical touchscreen using infrared, like the ones at this link: http://www.ledsmagazine.com/articles/print/volume-10/issue-9/features/optical-touchscreens-benefit-from-compact-high-power-infrared-leds-magazine.html

Since my programming skills are extremely limited (if I feel bold), or nonexistent (if I feel like comparing myself to the average programmer out there), I was wondering if anyone has tried to write some code for the first two types of optical touchscreens in the link (see images below), or knows of any freely available open-source code.
1309ledsweb_design2.jpg
1309ledsweb_design3.jpg

Thanks!
 
  • #2
It looks like you'll need to understand some trigonometry in order to convert your scan data into x,y coordinates of the touch.

Have you thought about how to do that?
 
  • #3
I have an idea, yes. My maths isn't too bad (especially trigonometry). I've written some code to handle data from a 3D model before. My concern is that all the code I've ever written was extremely unoptimised and took ages to run (with loops in loops in loops, combined with very basic functions). I don't think I've got the coding skills to be able to code efficiently, especially if I have to do it on an Arduino, which has limited memory. That's why I'm asking. If I'm provided with code, I'll be able to understand it with a bit of research and edit it to fit my needs.
 
  • #4
I don't think anyone here will be able to write this algorithm for you. However, we can help you develop it if you show your work on it.

I'd start first with a diagram of the geometry. Basically you'd follow something like this:
Step 1: Read each sensor and convert the data to an angle measure.
Step 2: Use the angles as input to get the x,y position as output.

Also, I'd put (0,0) at the top-left corner, as is the common convention for screen displays, with x increasing from left to right and y increasing from top to bottom.
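The two steps above can be sketched as follows. Everything here is an illustrative assumption rather than a value from the thread: a 17-pixel line-scan sensor, a 90-degree field of view, and two cameras at the top corners of the screen.

```python
import math

# Hypothetical line-scan sensor: 17 pixels spanning a 90-degree field of view.
NUM_PIXELS = 17
FOV_DEG = 90.0

def pixel_to_angle(pixel_index):
    """Step 1: map a 0-based pixel index to an angle in radians.

    Pixel 0 maps to one edge of the field of view and the last pixel to
    the other edge; intermediate pixels interpolate linearly.
    """
    fraction = pixel_index / (NUM_PIXELS - 1)
    return math.radians(fraction * FOV_DEG)

def angles_to_xy(theta_left, theta_right, width):
    """Step 2: triangulate the touch from two cameras at the top corners.

    The left camera sits at (0, 0) and the right camera at (width, 0);
    each angle is measured from the top edge toward the screen interior,
    so the rays are y = x*tan(theta_left) and y = (width - x)*tan(theta_right).
    Setting the two equal and solving gives x, then y.
    """
    tl, tr = math.tan(theta_left), math.tan(theta_right)
    x = width * tr / (tl + tr)
    y = x * tl
    return x, y
```

For example, with both angles at 45 degrees on a screen 2 units wide, the rays cross at (1.0, 1.0), which matches the top-left-origin convention described above.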
 
  • #5
Thanks,

I'll post again once I've done a bit of work and I know for sure which method I want to use. I also have to reassess my needs, because multitouch is fun, but when using it with a Windows desktop it is not that necessary. I also need to pick the IR emitter/receiver combination I want to use (budget and all).

Thanks for the conventions though. It will make it easier for people to understand.
 
  • #6
I apologize in advance for not using the convention, but I thought it would be better to keep it consistent with the paper I got it from: http://www.google.co.uk/url?sa=t&rc...=TjaSiO67H9-u0xpDT6zI_A&bvm=bv.99261572,d.d24

Hi all, so I plan on using the following method for my display. I might add a third camera to increase accuracy and allow multitouch. I think that relatively low-end line-scan cameras should do the trick, and they avoid a lot of the wiring and connections needed for an array of IR receivers.

Is it possible to output the image of the camera as a binary signal, i.e. areas where the finger is detected output 1 and the background outputs 0? If not, I will have to do some image processing, which I'm not sure is an easy thing to do.
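Whether the camera itself can output such a signal depends on the hardware, but the same effect is easy to get in software by thresholding each pixel against the calibrated background. A minimal sketch, with made-up sample values:

```python
def binarize(row, threshold=0):
    """Turn a monochrome scan line into a binary signal:
    1 where the pixel value exceeds the background threshold, else 0."""
    return [1 if v > threshold else 0 for v in row]

# Made-up scan line: background reads 0, a finger shows up as 1-3-5-3-1.
scan = [0, 0, 1, 3, 5, 3, 1, 0, 0]
print(binarize(scan))  # -> [0, 0, 1, 1, 1, 1, 1, 0, 0]
```

Raising the threshold above 0 would also reject low-level sensor noise, at the cost of shrinking the detected blob.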

upload_2015-8-4_15-50-56.png

Figure 1: Coordinate System for Pointer Locater Using Stereovision

The original position of the pointer can be found by solving:
upload_2015-8-4_15-51-19.png


Where:
upload_2015-8-4_15-51-34.png


where d2x, d2y are the coordinates of the focal point of the right camera.

Dividing the original position by the pixel size of the display yields the cursor position of the pointer.
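The equations above only survive as images, so their exact form isn't reproduced here, but the underlying computation is a standard intersection of two camera rays. A hedged sketch, where each camera is described only by its focal-point coordinates (e.g. d2x, d2y for the right camera) and the angle of the ray toward the pointer:

```python
import math

def intersect_rays(p1, theta1, p2, theta2):
    """Intersect two rays, each given as a focal point and an angle.

    Solves p1 + t1*u1 = p2 + t2*u2 as a 2x2 linear system, where
    u_i = (cos(theta_i), sin(theta_i)). Returns None if the rays are
    parallel, in which case there is no unique intersection.
    """
    u1 = (math.cos(theta1), math.sin(theta1))
    u2 = (math.cos(theta2), math.sin(theta2))
    # Determinant of the 2x2 system with columns u1 and -u2.
    det = -u1[0] * u2[1] + u2[0] * u1[1]
    if abs(det) < 1e-12:
        return None
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (-bx * u2[1] + u2[0] * by) / det
    return (p1[0] + t1 * u1[0], p1[1] + t1 * u1[1])

def to_cursor(position, pixel_size):
    """Divide the physical position by the display's pixel size to get
    the cursor position, as described above."""
    return (round(position[0] / pixel_size), round(position[1] / pixel_size))
```

For instance, rays from (0, 0) at 45 degrees and from (2, 0) at 135 degrees meet at (1.0, 1.0); with a 0.5-unit pixel pitch that maps to cursor position (2, 2).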
 
  • #7
I think you need to display the image and use the x,y value you computed to get the pixel color value.
 
  • #8
jedishrfu said:
I think you need to display the image and use the x,y value you computed to get the pixel color value.

I'm not clear on what you mean. I was thinking of using a monochrome camera; why is the pixel color value needed? I'm just interested in its position.
 
  • #9
Sorry I thought you meant a color camera and I thought perhaps you were trying to locate something in an image.

As an example, a camera records the road in front of a car and the computer processes the image using a combination of filters to isolate and locate the road line markers for steering.
 
  • #10
Ah okay, sorry, I explained it badly.

What I hope is to have a camera that "sees" a 1-pixel-thick layer above the display. The field of view of my line-scan camera is parallel to my display (side view). The idea is that, after calibration, the camera outputs a value of 0 for each pixel, say 00000000000000000. But when a finger touches the screen, some pixel values change, say 00135310000000000. I can therefore deduce that my finger is located where the 13531 is, i.e. to the left of my image. The centre of my finger is at the pixel with a value of 5. I can then work out the distance between that pixel and the centre of my image: in the example above I've got 17 pixels, so the centre of my image is the 9th pixel, and my finger's centre (the 5th pixel) is offset by 4 pixels from the centre of my image (see the N values in the sketch).
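That peak-finding step can be sketched directly on the example values above (the 17-pixel line and the 1-3-5-3-1 finger profile are taken from this post; treating the brightest pixel as the finger's centre is an assumption):

```python
def finger_offset(pixels):
    """Return the offset of the finger's centre from the image centre,
    or None if no pixel rises above the calibrated background of 0.
    Negative offsets mean the finger is left of centre."""
    active = [i for i, v in enumerate(pixels) if v > 0]
    if not active:
        return None
    # Take the brightest pixel of the touched run as the finger's centre.
    peak = max(active, key=lambda i: pixels[i])
    centre = (len(pixels) - 1) // 2  # the 9th pixel (index 8) of 17
    return peak - centre

scan = [int(c) for c in "00135310000000000"]
print(finger_offset(scan))  # peak (value 5) sits 4 pixels left of centre: -4
```

The sign of the offset, together with the camera's field of view, is what feeds the pixel-to-angle conversion discussed earlier in the thread.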
 

Related to How can I write code for an optical touchscreen using infrared technology?

1. What is an optical touchscreen?

An optical touchscreen is a technology that uses light to detect touch inputs on a screen. It works by projecting a grid of infrared light beams across the screen; when an object, such as a finger, interrupts the beams, the touch location can be determined.

2. How does an optical touchscreen work?

An optical touchscreen uses an array of infrared LEDs and sensors to create a grid of light beams across the screen. When an object, such as a finger, touches the screen, it interrupts the light beams and the sensors can determine the exact location of the touch. This information is then sent to the device's processor to execute the corresponding command.
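A toy model of that beam-interruption scan, assuming each receiver simply reports whether its beam arrived (the grid sizes and a single touch point are illustrative):

```python
def touch_from_grid(row_beams, col_beams):
    """row_beams/col_beams hold one boolean per horizontal/vertical beam:
    True = beam received, False = beam blocked by a finger.
    Returns the (column, row) of the first blocked beams, or None."""
    blocked_rows = [i for i, ok in enumerate(row_beams) if not ok]
    blocked_cols = [i for i, ok in enumerate(col_beams) if not ok]
    if not blocked_rows or not blocked_cols:
        return None  # no touch: every beam reached its receiver
    return (blocked_cols[0], blocked_rows[0])

# A finger at column 2, row 1 blocks one beam in each direction.
print(touch_from_grid([True, False, True], [True, True, False, True]))  # -> (2, 1)
```

Real controllers refine this with interpolation across several partially blocked beams, but the grid lookup is the core idea.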

3. What are the benefits of using an optical touchscreen?

One of the main benefits of an optical touchscreen is its high responsiveness and accuracy. It can detect multiple touch points at once, making it well suited to multi-touch gestures. It also does not require physical pressure on the screen, which can help prevent wear and tear on the device.

4. What are some common applications of optical touchscreens?

Optical touchscreens are commonly used in devices such as smartphones, tablets, and laptops. They are also used in public kiosks, interactive displays, and gaming consoles. Additionally, they are used in industrial and medical equipment for their precision and reliability.

5. Are there any limitations or drawbacks to using an optical touchscreen?

One limitation of optical touchscreens is that they can be affected by external light sources, such as sunlight or bright indoor lighting, which can interfere with the accuracy of touch detection. Additionally, they may not work well with certain screen protectors or gloves that block the infrared light beams.
