DeepfakeCapsuleGAN
Published:
Using GANs to generate the images used for deepfakes, with capsule networks in place of regular CNNs.
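The defining piece of a capsule network is the "squash" nonlinearity, which shrinks a capsule's output vector so its length lies in [0, 1) and can act as a probability. A minimal NumPy sketch of that function (the formula from the original capsule-network paper; the test values are toy inputs):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squashing: v = (||s||^2 / (1 + ||s||^2)) * s / ||s||.
    Preserves direction, maps length into [0, 1)."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([3.0, 4.0]))             # input length 5
assert np.isclose(np.linalg.norm(v), 25 / 26)  # 5^2 / (1 + 5^2)
```

In a capsule-based GAN discriminator, this replaces the scalar activations of a plain CNN, so pose information survives into deeper layers.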
Published:
Built a seq2seq neural conversational model in PyTorch using attention with intention and a diversity-promoting objective function to prevent irrelevant, generic outputs.
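A standard way to promote diversity in seq2seq responses is the MMI-antiLM criterion: rerank candidates by log p(T|S) − λ·log p(T), so replies that are likely under *any* context (e.g. "I don't know") are penalized. A minimal sketch with toy log-probabilities (not the project's exact objective, which is not spelled out here):

```python
# MMI-antiLM reranking sketch: score = log p(T|S) - lam * log p(T).
# The two candidate scores below are illustrative toy numbers.

def mmi_antilm_score(log_p_t_given_s, log_p_t, lam=0.5):
    """Higher is better; subtracting the LM term penalizes generic replies."""
    return log_p_t_given_s - lam * log_p_t

generic  = mmi_antilm_score(log_p_t_given_s=-2.0, log_p_t=-1.0)  # -1.5
specific = mmi_antilm_score(log_p_t_given_s=-2.5, log_p_t=-6.0)  #  0.5
assert specific > generic  # the specific reply wins after the LM penalty
```

The generic reply has higher conditional likelihood, but its high unconditional likelihood costs it the ranking.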
Published:
Using Stable Diffusion for text-to-image background generation and a MODNET/U2net model to isolate the foreground, seamlessly superimposing it via alpha matting, then applying Nvidia's FastPhotoStyle for image stylization and smoothing.
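The superimposition step is standard alpha compositing: each output pixel is a matte-weighted blend of foreground and background. A minimal NumPy sketch (the 2×2 images and matte values are toy data; in the pipeline the matte would come from MODNET/U2net and the background from Stable Diffusion):

```python
import numpy as np

def alpha_composite(fg, bg, alpha):
    """out = alpha * fg + (1 - alpha) * bg, per pixel."""
    alpha = alpha[..., None]  # broadcast matte over the 3 color channels
    return alpha * fg + (1.0 - alpha) * bg

fg = np.ones((2, 2, 3))                      # white foreground
bg = np.zeros((2, 2, 3))                     # black background
matte = np.array([[1.0, 0.5], [0.0, 1.0]])   # soft edge at (0, 1)
out = alpha_composite(fg, bg, matte)
assert out[0, 1, 0] == 0.5  # the soft-matte pixel blends both layers
```

The fractional matte values at object boundaries are what make the composite look seamless rather than cut-out.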
Published:
Generalized Successor Representations from neuroscience within an end-to-end deep reinforcement learning framework, comparing its efficacy to DQN on two diverse environments (Mazebase and DOOM) given raw pixel observations.
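The successor representation factors the value function into expected discounted state occupancies M and a reward vector w, with V(s) = M[s]·w, and M is learned by a TD rule. A minimal tabular sketch (a toy 2-state chain; the deep version replaces the table with learned features):

```python
import numpy as np

# Tabular SR: M[s, s'] estimates expected discounted future occupancy
# of s' starting from s. Toy settings; gamma and lr are illustrative.
gamma, lr = 0.9, 0.5

def sr_td_update(M, s, s_next):
    """TD update: M[s] <- M[s] + lr * (onehot(s) + gamma * M[s_next] - M[s])."""
    target = np.eye(len(M))[s] + gamma * M[s_next]
    M[s] += lr * (target - M[s])
    return M

M = sr_td_update(np.eye(2), s=0, s_next=1)
assert np.isclose(M[0, 1], 0.45)  # 0.5 * gamma * 1.0 after one transition
```

Because M is reward-independent, swapping in a new w revalues all states without relearning the dynamics, which is the appeal over a monolithic DQN value function.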
Published:
Created language-agnostic word embeddings via artificial code-switching, sharing structure across languages for any NLP task when labeled data is scarce.
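Artificial code-switching can be sketched as a training-time corruption: each word is swapped for a translation with some probability, so translation pairs end up in near-identical contexts and their embeddings converge. A minimal illustration (the three-entry lexicon and swap probability are toy assumptions, not the project's actual data):

```python
import random

# Tiny illustrative bilingual lexicon (English -> Spanish).
lexicon = {"dog": "perro", "house": "casa", "red": "rojo"}

def code_switch(tokens, p=0.3, rng=None):
    """Replace each in-lexicon token with its translation w.p. p."""
    rng = rng or random.Random(0)
    return [lexicon[t] if t in lexicon and rng.random() < p else t
            for t in tokens]

mixed = code_switch("the red dog ran home".split(), p=1.0)
assert mixed == ["the", "rojo", "perro", "ran", "home"]
```

Training an ordinary embedding model on such mixed text needs no parallel corpora beyond the word-level lexicon.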