Touch screens invite direct manipulation: users interact with your app by tapping, swiping, and pinching with their fingers.
SwiftUI includes a comprehensive Gesture API that makes it simple to detect taps, long presses, drags, and complex multi-touch events.
You can easily bind these physical gestures to update your app's state variables.
The most common gesture is the simple tap.
While Buttons are meant for standard taps, you can make any view tappable by using the .onTapGesture modifier.
import SwiftUI

struct TapCard: View {
    @State private var color: Color = .blue

    var body: some View {
        RoundedRectangle(cornerRadius: 20)
            .fill(color)
            .frame(width: 200, height: 150)
            .onTapGesture {
                // Change color instantly when tapped
                color = color == .blue ? .red : .blue
            }
    }
}
You can even specify the number of taps required, like .onTapGesture(count: 2) for a double-tap!
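As a sketch of the double-tap variant, here is a hypothetical view (the names DoubleTapZoom and isZoomed are illustrative, not from the example above) that toggles its scale only when tapped twice in quick succession:

```swift
import SwiftUI

struct DoubleTapZoom: View {
    @State private var isZoomed = false

    var body: some View {
        Image(systemName: "photo")
            .font(.system(size: 60))
            .scaleEffect(isZoomed ? 2 : 1)
            // count: 2 means single taps are ignored;
            // only a double-tap fires the closure
            .onTapGesture(count: 2) {
                withAnimation(.spring()) {
                    isZoomed.toggle()
                }
            }
    }
}
```

A single tap on this view does nothing, which is exactly why double-tap works well for optional actions like zooming.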
To detect when a user drags their finger across the screen, you use the .gesture() modifier combined with a DragGesture.
You can track the exact coordinates of the finger and move your views accordingly.
struct DraggableCircle: View {
    @State private var offset = CGSize.zero

    var body: some View {
        Circle()
            .fill(Color.orange)
            .frame(width: 100, height: 100)
            // Apply the offset to actually move the view
            .offset(offset)
            .gesture(
                DragGesture()
                    .onChanged { gesture in
                        // Update state as the finger moves
                        offset = gesture.translation
                    }
                    .onEnded { _ in
                        // Snap back to center when released
                        withAnimation(.spring()) {
                            offset = .zero
                        }
                    }
            )
    }
}
SwiftUI includes several other powerful gestures:
LongPressGesture: Detects when the user holds their finger down for a specific duration.
MagnifyGesture: Detects a two-finger pinch, perfect for zooming in on photos.
RotateGesture: Detects a two-finger twist.

You can even combine gestures to require them to happen simultaneously or in sequence!
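As one sketch of combining gestures in sequence (the view name PressThenDrag is illustrative), a LongPressGesture can be chained before a DragGesture with .sequenced(before:), so the circle only starts following the finger after a half-second press:

```swift
import SwiftUI

struct PressThenDrag: View {
    @State private var offset = CGSize.zero

    var body: some View {
        // The drag phase only begins once the long press succeeds
        let pressThenDrag = LongPressGesture(minimumDuration: 0.5)
            .sequenced(before: DragGesture())
            .onChanged { value in
                // .second(true, drag?) means the press finished
                // and the drag is now providing translation values
                if case .second(true, let drag?) = value {
                    offset = drag.translation
                }
            }
            .onEnded { _ in
                // Snap back to center when the finger lifts
                withAnimation(.spring()) {
                    offset = .zero
                }
            }

        Circle()
            .fill(Color.purple)
            .frame(width: 100, height: 100)
            .offset(offset)
            .gesture(pressThenDrag)
    }
}
```

Sequencing like this is a common way to make dragging deliberate, so a stray swipe doesn't accidentally move the view.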
Which modifier is used to easily execute code when a view is tapped, without creating a full Gesture object?