Tuesday 5 March 2013

Working with UIGestureRecognizers


UIGestureRecognizers were introduced in iOS 3.2, back when it was still called iPhone OS. UIGestureRecognizer is an abstract class that several concrete classes extend, e.g. UITapGestureRecognizer and UIPinchGestureRecognizer. Today we are going to be building a simple photo board application. You will be able to add photos to your board, then move, rotate and zoom them around the board. We will also build in some simple physics to give a sense of the photos being thrown around the board. Here is a short video of what our final product will look like.

GitHub

You can find this project on GitHub. Please let me know about any issues you run into. Happy coding!

Creating the project

Let's get a project ready that can handle all of this functionality. Open up Xcode and start a view-based iPad application called Photo Demo Board. Once the project window has come up, go to the Frameworks group in the left bar and right click on it. Select Add -> Existing Frameworks... A large modal view will come down listing all of the frameworks that can be added to the project. Add the MobileCoreServices framework. Now go into DemoPhotoBoardViewController.h and add in the line below.
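Presumably this is the MobileCoreServices import, which provides the kUTTypeImage constant we will use later when configuring the image picker:

#import <MobileCoreServices/MobileCoreServices.h>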
We might as well finish filling in the rest of the header now too. Do not worry about what these properties will be used for yet, just include them in the header for the moment; they are what we will use to keep track of our scaling, movement and rotation. The addPhoto method will be called from a button we put into our interface in the next step.
// Delegate protocols required by the picker, popover and gesture recognizers below.
@interface DemoPhotoBoardViewController : UIViewController <UIGestureRecognizerDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate, UIPopoverControllerDelegate> {

	CGFloat lastScale;
	CGFloat lastRotation;

	CGFloat firstX;
	CGFloat firstY;
}

- (IBAction)addPhoto:(id)sender;

@end

Filling in the XIB

The next thing we are going to do is add a toolbar and toolbar button to our XIB. Double click on DemoPhotoBoardViewController.xib. Once it has opened, drag in a UIToolbar and then put a UIBarButtonItem with a Flexible Space element to the left of it. Set the system item of the UIBarButtonItem to "Add". Now if you go to the File's Owner below and right click on it, you should see an outlet for the method "addPhoto". Connect this to the add button. As a final step, select the UIToolbar and look at its size in the inspector panel. Make sure its autosizing settings match those seen below so that things do not get screwy when the app is in other orientations.

Implementing the photo picker

Go ahead and open up DemoPhotoBoardViewController.m. The first thing we are going to do is implement the addPhoto method. Insert the following code into your implementation.
- (IBAction)addPhoto:(id)sender {

	UIImagePickerController *controller = [[UIImagePickerController alloc] init];
	[controller setMediaTypes:[NSArray arrayWithObject:(NSString *)kUTTypeImage]];
	[controller setDelegate:self];

	// Note: under ARC you would need to keep a strong reference to this popover
	// (for example in a property) so it survives past the end of this method.
	UIPopoverController *popover = [[UIPopoverController alloc] initWithContentViewController:controller];
	[popover setDelegate:self];
	[popover presentPopoverFromBarButtonItem:sender permittedArrowDirections:UIPopoverArrowDirectionUp animated:YES];
}
This method creates a UIImagePickerController and tells it to only display images. Next we create a UIPopoverController, instantiated with our UIImagePickerController as the content view controller. We set the delegate to ourself and present it from the bar button item sender, which refers to the add button in our interface. We know the popover will always be below our button, so we force the arrow direction to always point up. With this done, we can now run the app and see a UIImagePickerController appear in a UIPopoverController below our add button.

Setting up the Gesture Recognizers

Now we need to implement the delegate method for our UIImagePickerController and add the image to our view when it is selected. We do this with the imagePickerController:didFinishPickingMediaWithInfo: delegate method. This method hands us a dictionary where the key @"UIImagePickerControllerOriginalImage" will return a UIImage object of the image the user selected. We are going to create a UIImageView and then put this UIImageView in a UIView holder. The reason we do this is because standard UIImageViews, despite being UIView subclasses, do not react to gesture recognizers added to them (likely because UIImageView has userInteractionEnabled disabled by default); wrapping the image in a plain UIView is the solution I have found in my testing. We are going to create four different kinds of UIGestureRecognizers and connect them to our holder view.
We will first create a UIPinchGestureRecognizer. This object does not require much customization; we simply set its target to ourself with the scale: selector and assign this class as its delegate. With this done we add it to the holder view we created.
Next we create a UIRotationGestureRecognizer. This object does not require much customization either. We simply set it to call the rotate: method in our class and set its delegate.
Next we create the UIPanGestureRecognizer. We create the pan gesture recognizer to call the move: method when it is activated. We tell it that we only care about a single-finger pan by setting the maximum and minimum number of touches to 1. We once again add this to the holder view we created.
The final UIGestureRecognizer we create is the UITapGestureRecognizer. The UITapGestureRecognizer will be used to stop an object that has been "thrown" before it reaches its stopping point; essentially it will be used to catch an object while it is still moving. We set the number of taps required to 1 and set the delegate. We add this final UIGestureRecognizer to our holder view and then add the holder view as a subview. You can see the code below.
 
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {

	UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];

	UIView *holderView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
	UIImageView *imageView = [[UIImageView alloc] initWithFrame:[holderView frame]];
	[imageView setImage:image];
	[holderView addSubview:imageView];

	UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scale:)];
	[pinchRecognizer setDelegate:self];
	[holderView addGestureRecognizer:pinchRecognizer];

	UIRotationGestureRecognizer *rotationRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotate:)];
	[rotationRecognizer setDelegate:self];
	[holderView addGestureRecognizer:rotationRecognizer];

	UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(move:)];
	[panRecognizer setMinimumNumberOfTouches:1];
	[panRecognizer setMaximumNumberOfTouches:1];
	[panRecognizer setDelegate:self];
	[holderView addGestureRecognizer:panRecognizer];

	UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
	[tapRecognizer setNumberOfTapsRequired:1];
	[tapRecognizer setDelegate:self];
	[holderView addGestureRecognizer:tapRecognizer];

	[self.view addSubview:holderView];
}
So let's quickly define each of these methods as stubs so we can see them all fire as we touch an object that we add to the view. Add this code in and run the application; you can tap and drag the objects you add to the board and see the log messages appearing in the console. In the simulator you may not be able to trigger all of these, because multitouch is simulated in a pretty limited way, but you can run the code and try this out.
- (void)scale:(id)sender {
	NSLog(@"See a pinch gesture");
}

- (void)rotate:(id)sender {
	NSLog(@"See a rotate gesture");
}

- (void)move:(id)sender {
	NSLog(@"See a move gesture");
}

- (void)tapped:(id)sender {
	NSLog(@"See a tap gesture");
}

UIGestureRecognizer Action Methods

All UIGestureRecognizers have a state property of type UIGestureRecognizerState. This is because UIGestureRecognizers call their action methods throughout the entire time a gesture is being performed. When the gesture first begins, the state of the calling UIGestureRecognizer is UIGestureRecognizerStateBegan; subsequent calls have the UIGestureRecognizerStateChanged state, and the final call has the UIGestureRecognizerStateEnded state. We can use this to our advantage to do housekeeping in each of our gesture action methods. Another important thing to note about the action calls from UIGestureRecognizers is that the properties they report about a gesture, such as scale for UIPinchGestureRecognizer and rotation for UIRotationGestureRecognizer, are always given in reference to the original state of the view. So as a pinch is happening, the scale may be reported as 1.1, 1.2, 1.3, 1.4 and 1.5 on subsequent calls. These scales are not incremental deltas but are all relative to the original state of the view the UIGestureRecognizer is attached to.
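As a minimal sketch of that lifecycle (this handler is illustrative only and is not part of the demo app), an action method can branch on the recognizer's state like this:

- (void)handleGesture:(UIGestureRecognizer *)recognizer {

	switch ([recognizer state]) {
		case UIGestureRecognizerStateBegan:
			NSLog(@"Gesture began, first call for this gesture");
			break;
		case UIGestureRecognizerStateChanged:
			NSLog(@"Gesture changed, called repeatedly while the fingers move");
			break;
		case UIGestureRecognizerStateEnded:
			NSLog(@"Gesture ended, a good place to reset any bookkeeping");
			break;
		default:
			break;
	}
}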

Implementing Scaling

The first thing we will do is implement the scale method. The scale method is called with an id sender; this sender will actually be a UIPinchGestureRecognizer object. If we look at the documentation for UIPinchGestureRecognizer we will see that it includes a scale property that is a CGFloat. This scale property is provided by the UIPinchGestureRecognizer every time the scale: method is called, and it is cumulative, always in reference to the original state of the view. Because of this, as we make our photo grow we must make sure that we only scale by the difference between the last reported scale and the current one. For example, if the first scale: call reports the UIPinchGestureRecognizer scale as 1.1 and the next call reports 1.2, we should scale by 1.1 and then by another 1.1. To handle this we have the class property CGFloat lastScale. This keeps track of the last scale we applied to our view so that on the next call we only apply the difference between them.
So now that we can tell how much to scale an item when it is pinched, we need to look at the mechanism that will actually scale the view. Every UIView has a property called transform of type CGAffineTransform. This describes much of the geometry of the view as it is drawn. Using functions provided by the Quartz framework we will figure out how to change this value to scale as we need. Let's first take a look at our whole scaling method.
- (void)scale:(id)sender {

	[self.view bringSubviewToFront:[(UIPinchGestureRecognizer *)sender view]];

	if ([(UIPinchGestureRecognizer *)sender state] == UIGestureRecognizerStateEnded) {

		lastScale = 1.0;
		return;
	}

	CGFloat scale = 1.0 - (lastScale - [(UIPinchGestureRecognizer *)sender scale]);

	CGAffineTransform currentTransform = [[(UIPinchGestureRecognizer *)sender view] transform];
	CGAffineTransform newTransform = CGAffineTransformScale(currentTransform, scale, scale);

	[[(UIPinchGestureRecognizer *)sender view] setTransform:newTransform];

	lastScale = [(UIPinchGestureRecognizer *)sender scale];
}
The first thing we do in this method is bring the touched view to the front. We do this by accessing the view property of our sender, which in this case is the UIPinchGestureRecognizer. The next thing we do is check whether this is the final call in this pinch gesture. If it is, we reset lastScale to 1, since applying a scale of 1 leaves a view unchanged; this makes the final size the new starting point for the next pinch sequence. Any call other than the last one subtracts the difference between the last scale and the current scale from 1. This gives the scale change between the current call and the previous one, which is what we want to apply to the current CGAffineTransform of the view the gesture recognizer is attached to. We get the current transform of the view and pass it into the CGAffineTransformScale() function. The first parameter is the current transform and the following two are the x and y scales to be applied to it. The output is the new transform for the view. We apply this and then store the reported scale in lastScale.

Implementing Rotation

The next thing we handle is rotation. This method has a very similar structure to the scaling method. We use another class property, lastRotation, this time, and a slightly different Quartz function, but the overall code should make sense. Check it out below.
- (void)rotate:(id)sender {

	[self.view bringSubviewToFront:[(UIRotationGestureRecognizer *)sender view]];

	if ([(UIRotationGestureRecognizer *)sender state] == UIGestureRecognizerStateEnded) {

		lastRotation = 0.0;
		return;
	}

	CGFloat rotation = 0.0 - (lastRotation - [(UIRotationGestureRecognizer *)sender rotation]);

	CGAffineTransform currentTransform = [[(UIRotationGestureRecognizer *)sender view] transform];
	CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, rotation);

	[[(UIRotationGestureRecognizer *)sender view] setTransform:newTransform];

	lastRotation = [(UIRotationGestureRecognizer *)sender rotation];
}

Implementing Movement

Now we handle movement, which is a bit different from the rotation and scaling transformations. Although you could also move the object around using the transform property, we are instead going to continuously reset the center of each view. We use the UIPanGestureRecognizer's translationInView: method to get the point the view has been moved to, relative to its starting point. If this is the first call from the gesture recognizer we record the view's original center in our class properties firstX and firstY. We then calculate our translated point by adding the original center coordinates to the translation in the view, and set the view's center to this newly calculated point. You can see the code below.
- (void)move:(id)sender {

	[[[(UIPanGestureRecognizer *)sender view] layer] removeAllAnimations];

	[self.view bringSubviewToFront:[(UIPanGestureRecognizer *)sender view]];
	CGPoint translatedPoint = [(UIPanGestureRecognizer *)sender translationInView:self.view];

	if ([(UIPanGestureRecognizer *)sender state] == UIGestureRecognizerStateBegan) {

		firstX = [[sender view] center].x;
		firstY = [[sender view] center].y;
	}

	translatedPoint = CGPointMake(firstX + translatedPoint.x, firstY + translatedPoint.y);

	[[sender view] setCenter:translatedPoint];

	if ([(UIPanGestureRecognizer *)sender state] == UIGestureRecognizerStateEnded) {

		CGFloat finalX = translatedPoint.x + (.35 * [(UIPanGestureRecognizer *)sender velocityInView:self.view].x);
		CGFloat finalY = translatedPoint.y + (.35 * [(UIPanGestureRecognizer *)sender velocityInView:self.view].y);

		if (UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation])) {

			if (finalX < 0) {
				finalX = 0;
			}
			else if (finalX > 768) {
				finalX = 768;
			}

			if (finalY < 0) {
				finalY = 0;
			}
			else if (finalY > 1024) {
				finalY = 1024;
			}
		}

		else {

			if (finalX < 0) {
				finalX = 0;
			}
			else if (finalX > 1024) {
				finalX = 1024;
			}

			if (finalY < 0) {
				finalY = 0;
			}
			else if (finalY > 768) {
				finalY = 768;
			}
		}

		[UIView beginAnimations:nil context:NULL];
		[UIView setAnimationDuration:.35];
		[UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
		[[sender view] setCenter:CGPointMake(finalX, finalY)];
		[UIView commitAnimations];
	}
}

Implementing Momentum

The second half of the above method calculates the momentum the object will have after being released. This makes the object appear as if it is being thrown across a table and slowly coming to a stop. In order to do this we use the UIPanGestureRecognizer's velocityInView: method, which tells us the velocity of the pan within a given view. With this we can do an easy position calculation for both the x and y coordinates of our object. To do this we must pick an input for time, in this case .35 seconds; for example, a pan released with a horizontal velocity of 1000 points per second will slide a further 350 points. While this is not truly momentum and friction based physics, it provides a nice effect for our interaction. With our final resting place calculated, we ensure that where the object ends up stays within the visible surface of the iPad by clamping against the bounds of the screen, depending on the current orientation. The final step is to animate the view moving to this final location over the same .35 second time period.
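In isolation, the projection described above is simply position plus velocity times time. Here is a minimal sketch using a hypothetical helper function (not part of the demo project) with the same .35 second window as the method above:

// Hypothetical helper: project where a thrown view should come to rest,
// given its current center and the pan velocity in points per second.
static CGPoint projectedRestingPoint(CGPoint center, CGPoint velocity, CGFloat duration) {
	return CGPointMake(center.x + velocity.x * duration,
	                   center.y + velocity.y * duration);
}

// Usage inside a pan handler (sketch):
// CGPoint velocity = [(UIPanGestureRecognizer *)sender velocityInView:self.view];
// CGPoint resting = projectedRestingPoint(translatedPoint, velocity, 0.35);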

Implementing Taps

We have one final gesture recognizer to implement and that is the tapped: method. This method is used when a user taps on an object that is in the midst of sliding after being thrown; essentially we want to stop the movement mid-slide. In order to do that we tell the CALayer layer property of our view to cancel all current animations. The short piece of code can be seen below.
- (void)tapped:(id)sender {

	[[[(UITapGestureRecognizer *)sender view] layer] removeAllAnimations];
}

Implementing UIGestureRecognizerDelegate

If you run the code now you will be able to perform all of the gestures described in this document, but you will notice that you are not able to do several at the same time. For instance, you cannot pinch zoom and rotate a view at the same time. This is because we still need to implement the UIGestureRecognizerDelegate method gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:. We want any gesture recognizers to be able to happen together except for the pan gesture recognizer. To do this we simply check that the recognizer is not of the UIPanGestureRecognizer class and return YES in that case. See the short code below.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {

	return ![gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]];
}

GitHub

You can find this project on GitHub. Please let me know about any issues you run into. Happy coding!
