Migration from XNA to MonoGame -> Model does not contain any Meshes

Hi,

I am converting an XNA application to MonoGame, and I have solved all the effect issues.

When I load the Model in the application, the bone count shows correctly, but there are no meshes inside.

Can you advise what the problem might be?

Thanks,
Bala

Hi,

I found that the problem comes from ModelProcessor.

ModelContent model = base.Process(input, context);

base.Process returns no meshes, but it has bone information.
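
For reference, a quick way to see whether the meshes are already missing before the processor runs is to count the MeshContent nodes in the incoming hierarchy. This is just a rough debugging sketch (the helper name is my own):

// Rough sketch: count MeshContent nodes in the imported hierarchy, so we can
// tell whether the importer or base.Process is dropping the geometry.
// Uses the standard Microsoft.Xna.Framework.Content.Pipeline types.
private static int CountMeshes(NodeContent node)
{
    int count = node is MeshContent ? 1 : 0;
    foreach (NodeContent child in node.Children)
        count += CountMeshes(child);
    return count;
}

public override ModelContent Process(NodeContent input, ContentProcessorContext context)
{
    context.Logger.LogMessage("Meshes in imported content: {0}", CountMeshes(input));

    ModelContent model = base.Process(input, context);

    context.Logger.LogMessage("Meshes in processed model: {0}", model.Meshes.Count);
    return model;
}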

Please suggest a solution…

The solution would depend on your model. I can’t guess what your model contains, and the processor does work for other models. We need some more details about your model.

Hi Konaju,

Thanks for your reply!

None of my models has a mesh list. I am using a custom content processor; could that be the issue?

My model is a simple Sphere.fbx file exported from Blender.

You may know the LightPrePass example by jcoluna. I am using his LightPrePassProcessor, which was written for XNA.

Please let me know how to convert that project to MonoGame. Has anybody done that before?

Thanks,
Bala

Hi,

I am getting this exception from base.Process():

An exception of type 'Microsoft.Xna.Framework.Content.Pipeline.InvalidContentException' occurred in MonoGame.Framework.Content.Pipeline.dll but was not handled in user code

Additional information: Bad token CppNet.Token

Please help me with this issue…

Thanks and Regards,
Bala

That is an error coming from the effect preprocessing. Can you show us the shader code for the effect?

Yeah sure… here it is…

//-----------------------------------------------------------------------------
// LPPMainEffect.fx
//
// Jorge Adriano Luna 2011
// http://jcoluna.wordpress.com
//
// It uses some code from the Normal Mapping Sample found at
// http://create.msdn.com/en-US/education/catalog/sample/normal_mapping
// and also code from here
// http://aras-p.info/texts/CompactNormalStorage.html
//-----------------------------------------------------------------------------
#include "Macros.fxh"

//-----------------------------------------
// Parameters
//-----------------------------------------
float4x4 World;
float4x4 WorldView;
float4x4 View;
float4x4 Projection;
float4x4 WorldViewProjection;
float4x4 LightViewProj; //used when rendering to shadow map

float FarClip;
float2 LightBufferPixelSize;

//as we used a 0.1f scale when rendering to light buffer,
//revert it back here.
const static float LightBufferScaleInv = 10.0f;

float4 AmbientColor;

//we should use one of these defines to compute ambient color:
//NO_AMBIENT -- default, you don't need to define it
//AMBIENT_COLOR --we use an external variable to modulate the diffuse color
//AMBIENT_CUBEMAP --we use a cubemap to encode ambient light information

#define AMBIENT_CUBEMAP

#ifdef ALPHA_MASKED
float AlphaReference;
#endif

//-----------------------------------------
// Textures
//-----------------------------------------
texture DiffuseMap;
sampler diffuseMapSampler = sampler_state
{
	Texture = (DiffuseMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};

texture SpecularMap;
sampler specularMapSampler = sampler_state
{
	Texture = (SpecularMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};

texture NormalMap;
sampler normalMapSampler = sampler_state
{
	Texture = (NormalMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};

texture EmissiveMap;
sampler emissiveMapSampler = sampler_state
{
	Texture = (EmissiveMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};

texture LightBuffer;
sampler2D lightSampler = sampler_state
{
	Texture = <LightBuffer>;
	MipFilter = POINT;
	MagFilter = POINT;
	MinFilter = POINT;
	AddressU = Clamp;
	AddressV = Clamp;
};


texture LightSpecularBuffer;
sampler2D lightSpecularSampler = sampler_state
{
	Texture = <LightSpecularBuffer>;
	MipFilter = NONE;
	MagFilter = POINT;
	MinFilter = POINT;
	AddressU = Clamp;
	AddressV = Clamp;
};



#ifdef DUAL_LAYER
texture SecondDiffuseMap;
sampler secondDiffuseMapSampler = sampler_state
{
	Texture = (SecondDiffuseMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};


texture SecondSpecularMap;
sampler secondSpecularMapSampler = sampler_state
{
	Texture = (SecondSpecularMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};


texture SecondNormalMap;
sampler secondNormalMapSampler = sampler_state
{
	Texture = (SecondNormalMap);
	MAGFILTER = LINEAR;
	MINFILTER = LINEAR;
	MIPFILTER = LINEAR;
	AddressU = Wrap;
	AddressV = Wrap;
};
#endif

#ifdef AMBIENT_CUBEMAP

texture AmbientCubeMap;
samplerCUBE ambientCubemapSampler = sampler_state
{
	Texture = <AmbientCubeMap>;
	MinFilter=LINEAR;
	MagFilter=LINEAR;
	MipFilter=LINEAR;
	AddressU = WRAP;
	AddressV = WRAP;
};
#endif
//-------------------------------
// Helper functions
//-------------------------------
half2 EncodeNormal (half3 n)
{
	float kScale = 1.7777;
	float2 enc;
	enc = n.xy / (n.z+1);
	enc /= kScale;
	enc = enc*0.5+0.5;
	return enc;
}

float2 PostProjectionSpaceToScreenSpace(float4 pos)
{
	float2 screenPos = pos.xy / pos.w;
	return (0.5f * (float2(screenPos.x, -screenPos.y) + 1));
}

half3 NormalMapToSpaceNormal(half3 normalMap, float3 normal, float3 binormal, float3 tangent)
{
	normalMap = normalMap * 2 - 1;
	normalMap = half3(normal * normalMap.z + normalMap.x * tangent - normalMap.y * binormal);
	return normalMap;
}	


//-------------------------------
// Shaders
//-------------------------------

#ifdef SKINNED_MESH

#define MaxBones 60
float4x4 Bones[MaxBones];

#endif

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float3 Normal	: NORMAL0;
	float3 Binormal  : BINORMAL0;
	float3 Tangent  : TANGENT;
#ifdef SKINNED_MESH
    float4 BoneIndices : BLENDINDICES0;
    float4 BoneWeights : BLENDWEIGHT0;
#endif
#ifdef DUAL_LAYER
    float4 Color   : COLOR0;		
#endif
};


struct VertexShaderOutput
{
    float4 Position			: POSITION0;
    float3 TexCoord			: TEXCOORD0;
    float Depth				: TEXCOORD1;
	
    float3 Normal	: TEXCOORD2;
    float3 Tangent	: TEXCOORD3;
    float3 Binormal : TEXCOORD4; 
};

struct PixelShaderInput
{
    float4 Position			: POSITION0;
    float3 TexCoord			: TEXCOORD0;
    float Depth				: TEXCOORD1;
	
    float3 Normal	: TEXCOORD2;
    float3 Tangent	: TEXCOORD3;
    float3 Binormal : TEXCOORD4; 	
	
	//we need this to detect back facing triangles
#ifdef ALPHA_MASKED	
	float Face : VFACE;
#endif
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;

	
#ifdef SKINNED_MESH
	// Blend between the weighted bone matrices.
    float4x4 skinTransform = 0;
    
    skinTransform += Bones[input.BoneIndices.x] * input.BoneWeights.x;
    skinTransform += Bones[input.BoneIndices.y] * input.BoneWeights.y;
    skinTransform += Bones[input.BoneIndices.z] * input.BoneWeights.z;
    skinTransform += Bones[input.BoneIndices.w] * input.BoneWeights.w;
	
	float4 skinPos = mul(input.Position, skinTransform);
	float3 skinNormal = mul(input.Normal, skinTransform);
	float3 skinTangent = mul(input.Tangent, skinTransform);
	float3 skinBinormal = mul(input.Binormal, skinTransform);
	
	float3 viewSpacePos = mul(skinPos, WorldView);
    output.Position = mul(skinPos, WorldViewProjection);
	
    output.TexCoord.xy = input.TexCoord; //pass the texture coordinates further
	
	//we output our normals/tangents/binormals in viewspace
    output.Normal = mul(skinNormal,WorldView); 
	output.Tangent =mul(skinTangent,WorldView); 
	output.Binormal =mul(skinBinormal,WorldView); 

#else

	float3 viewSpacePos = mul(input.Position, WorldView);
    output.Position = mul(input.Position, WorldViewProjection);
    output.TexCoord.xy = input.TexCoord; //pass the texture coordinates further

	//we output our normals/tangents/binormals in viewspace
	output.Normal = normalize(mul(input.Normal,WorldView)); 
	output.Tangent =  normalize(mul(input.Tangent,WorldView)); 
	output.Binormal =  normalize(mul(input.Binormal,WorldView)); 
#endif

#ifdef DUAL_LAYER
    output.TexCoord.z = input.Color.r;	
#endif
		
	output.Depth = viewSpacePos.z; //pass depth
    return output;
}
//render to our 2 render targets, normal and depth 
struct PixelShaderOutput
{
    float4 Normal : COLOR0;
    float4 Depth : COLOR1;
};

PixelShaderOutput PixelShaderFunction(PixelShaderInput input)
{
	PixelShaderOutput output = (PixelShaderOutput)1;   

	//if we are using alpha mask, we need to read the diffuse map	
#ifdef ALPHA_MASKED
	half4 diffuseMap = tex2D(diffuseMapSampler, input.TexCoord);
	clip(diffuseMap.a - AlphaReference);	
#elif defined(DUAL_LAYER)
	float transition = tex2D(diffuseMapSampler, input.TexCoord).a;
	transition = (1.3f*input.TexCoord.z-0.15f) - transition;		
	transition = saturate(transition*5);
#endif

	//read from our normal map
	half4 normalMap = tex2D(normalMapSampler, input.TexCoord);
		
#ifdef DUAL_LAYER
	normalMap=normalMap*transition + tex2D(secondNormalMapSampler, input.TexCoord)*(1-transition);
#endif

	half3 normalViewSpace = NormalMapToSpaceNormal(normalMap.xyz, input.Normal, input.Binormal, input.Tangent);
    
	//if we are using alpha mask, we need to invert the normal if its a back face
#ifdef ALPHA_MASKED	
	normalViewSpace = normalViewSpace * sign(input.Face);
#endif

	output.Normal.rg =  EncodeNormal (normalize(normalViewSpace));	//our encoder output in RG channels
	output.Normal.b = normalMap.a;			//our specular power goes into B channel
	output.Normal.a = 1;					//not used
	output.Depth.x = -input.Depth/ FarClip;		//output Depth in linear space, [0..1]
	
	return output;
}


struct ReconstructVertexShaderInput
{
    float4 Position  : POSITION0;
    float2 TexCoord  : TEXCOORD0;
#ifdef AMBIENT_CUBEMAP
	float3 Normal	 : NORMAL0;
#endif
#ifdef SKINNED_MESH
    float4 BoneIndices : BLENDINDICES0;
    float4 BoneWeights : BLENDWEIGHT0;
#endif
#ifdef DUAL_LAYER
    float4 Color   : COLOR0;		
#endif
};




struct ReconstructVertexShaderOutput
{
    float4 Position			: POSITION0;
    float3 TexCoord			: TEXCOORD0;
	float4 TexCoordScreenSpace : TEXCOORD1;
#ifdef AMBIENT_CUBEMAP
	float3 Normal	 : TEXCOORD2;
#endif
};

ReconstructVertexShaderOutput ReconstructVertexShaderFunction(ReconstructVertexShaderInput input)
{
    ReconstructVertexShaderOutput output=(ReconstructVertexShaderOutput)0;

#ifdef SKINNED_MESH
	// Blend between the weighted bone matrices.
    float4x4 skinTransform = 0;
    
    skinTransform += Bones[input.BoneIndices.x] * input.BoneWeights.x;
    skinTransform += Bones[input.BoneIndices.y] * input.BoneWeights.y;
    skinTransform += Bones[input.BoneIndices.z] * input.BoneWeights.z;
    skinTransform += Bones[input.BoneIndices.w] * input.BoneWeights.w;

	float4 skinPos = mul(input.Position, skinTransform);

    output.Position = mul(skinPos, WorldViewProjection);
	#ifdef AMBIENT_CUBEMAP
	float3 skinNormal = mul(input.Normal, skinTransform);
	output.Normal = normalize(mul(skinNormal,World)); 
	#endif

#else

    output.Position = mul(input.Position, WorldViewProjection);

	#ifdef AMBIENT_CUBEMAP
	output.Normal = normalize(mul(input.Normal,World)); 
	#endif
#endif

    output.TexCoord.xy = input.TexCoord; //pass the texture coordinates further
	output.TexCoordScreenSpace = output.Position;
	
#ifdef DUAL_LAYER
    output.TexCoord.z = input.Color.r;	
#endif

    return output;
}

float4 ReconstructPixelShaderFunction(ReconstructVertexShaderOutput input):COLOR0
{
	PixelShaderOutput output = (PixelShaderOutput)1;   
	// Find the screen space texture coordinate and offset it
	float2 screenPos = PostProjectionSpaceToScreenSpace(input.TexCoordScreenSpace) + LightBufferPixelSize;

	//read from our diffuse, specular and emissive maps
	half4 diffuseMap = tex2D(diffuseMapSampler, input.TexCoord);

	
#ifdef ALPHA_MASKED	
	clip(diffuseMap.a - AlphaReference);
#endif
	


	half3 emissiveMap = tex2D(emissiveMapSampler, input.TexCoord).rgb;
	half3 specularMap = tex2D(specularMapSampler, input.TexCoord).rgb;
	
#ifdef DUAL_LAYER
	float transition = (1.3f*input.TexCoord.z - 0.15f) - diffuseMap.a;		
	transition = saturate(transition*5);
	diffuseMap.rgb = diffuseMap.rgb*transition + tex2D(secondDiffuseMapSampler, input.TexCoord).rgb*(1-transition);
	specularMap = specularMap.rgb*transition + tex2D(secondSpecularMapSampler, input.TexCoord).rgb*(1-transition);
#endif

	//read our light buffer texture. Remember to multiply by our magic constant explained on the blog
	float4 lightColor =  tex2D(lightSampler, screenPos) * LightBufferScaleInv;

	//our specular intensity is stored in a separate texture
	float4 specular =  tex2D(lightSpecularSampler, screenPos) * LightBufferScaleInv;
	
	float4 finalColor = float4(diffuseMap*lightColor.rgb + specular*specularMap + emissiveMap,1);

#ifdef AMBIENT_COLOR
	//add a small constant to avoid dark areas
	finalColor.rgb+= diffuseMap*AmbientColor.rgb;
#elif defined(AMBIENT_CUBEMAP)
	//fetch ambient cubemap using vertex normal. Maybe you will want to use the per-pixel normal. In this case,
	//you should fetch the normal buffer as we do with the lightBuffer, and recompute the global-space normal
	half3 ambientCubemapColor = texCUBE(ambientCubemapSampler,input.Normal);
	finalColor.rgb += AmbientColor.rgb*ambientCubemapColor.rgb*diffuseMap.rgb;
#endif



	return finalColor;
}


struct ShadowMapVertexShaderInput
{
    float4 Position : POSITION0;	
	//if we have alpha mask, we need to use the tex coord
#ifdef ALPHA_MASKED	
    float2 TexCoord  : TEXCOORD0;
#endif
#ifdef SKINNED_MESH
    float4 BoneIndices : BLENDINDICES0;
    float4 BoneWeights : BLENDWEIGHT0;
#endif

};

struct ShadowMapVertexShaderOutput
{
    float4 Position : POSITION0;
	float2 Depth : TEXCOORD0;
#ifdef ALPHA_MASKED	
    float2 TexCoord  : TEXCOORD1;
#endif
};



ShadowMapVertexShaderOutput OutputShadowVertexShaderFunction(ShadowMapVertexShaderInput input)
{
    ShadowMapVertexShaderOutput output = (ShadowMapVertexShaderOutput)0;
	
#ifdef SKINNED_MESH
	// Blend between the weighted bone matrices.
    float4x4 skinTransform = 0;
    
    skinTransform += Bones[input.BoneIndices.x] * input.BoneWeights.x;
    skinTransform += Bones[input.BoneIndices.y] * input.BoneWeights.y;
    skinTransform += Bones[input.BoneIndices.z] * input.BoneWeights.z;
    skinTransform += Bones[input.BoneIndices.w] * input.BoneWeights.w;

	float4 skinPos = mul(input.Position, skinTransform);
    float4 clipPos = mul(skinPos, mul(World, LightViewProj));
#else
    float4 clipPos = mul(input.Position, mul(World, LightViewProj));
#endif
	//clamp to the near plane
	clipPos.z = max(clipPos.z,0);
	
	output.Position = clipPos;
	output.Depth = output.Position.zw;
	
#ifdef ALPHA_MASKED	
    output.TexCoord = input.TexCoord; //pass the texture coordinates further	
#endif
    return output;
}

float4 OutputShadowPixelShaderFunction(ShadowMapVertexShaderOutput input) : COLOR0
{
#ifdef ALPHA_MASKED	
	//read our diffuse
	half4 diffuseMap = tex2D(diffuseMapSampler, input.TexCoord);
	clip(diffuseMap.a - AlphaReference);
#endif

    float depth = input.Depth.x / input.Depth.y;	
    return float4(depth, 1, 1, 1); 
}

technique RenderToGBuffer
{
    pass RenderToGBufferPass
    {
	#ifdef ALPHA_MASKED	
		CullMode = None;
	#else
		CullMode = CCW;
	#endif

        VertexShader = compile VS_PROFILE VertexShaderFunction();
        PixelShader = compile PS_PROFILE PixelShaderFunction();
    }
}

technique ReconstructShading
{
	pass ReconstructShadingPass
    {
	#ifdef ALPHA_MASKED	
		CullMode = None;
	#else
		CullMode = CCW;
	#endif

        VertexShader = compile VS_PROFILE ReconstructVertexShaderFunction();
        PixelShader = compile PS_PROFILE ReconstructPixelShaderFunction();
    }
}

technique OutputShadow
{
	pass OutputShadowPass
	{		
	#ifdef ALPHA_MASKED	
		CullMode = None;
	#else
		CullMode = CCW;
	#endif

        VertexShader = compile VS_PROFILE OutputShadowVertexShaderFunction();
        PixelShader = compile PS_PROFILE OutputShadowPixelShaderFunction();
	}
}

Macros.fxh

#ifdef SM4

// Macros for targeting shader model 4.0 (DX11)
#define PS_PROFILE ps_4_0_level_9_3
#define VS_PROFILE vs_4_0 //4_0_level_9_3

#elif SM3

#define PS_PROFILE ps_3_0
#define VS_PROFILE vs_3_0

#else

// Macros for targeting shader model 2.0 (DX9)
#define PS_PROFILE ps_2_0
#define VS_PROFILE vs_2_0

#endif

Thanks,
Bala

Hi,

A little more progress:

I had previously assigned the .xnb file, which is why I got the Bad token CppNet error.

But when I changed it to the .fx file, I got a different error, this time in MaterialProcessor:

An exception of type 'Microsoft.Xna.Framework.Content.Pipeline.PipelineException' occurred in MonoGame.Framework.Content.Pipeline.dll but was not handled in user code

Additional information: The source file 

Am I doing something wrong here?

    protected override MaterialContent ConvertMaterial(MaterialContent material,
        ContentProcessorContext context)
    {

        //System.Diagnostics.Debugger.Launch();
        EffectMaterialContent lppMaterial = new EffectMaterialContent();

        OpaqueDataDictionary processorParameters = new OpaqueDataDictionary();
        processorParameters["ColorKeyColor"] = this.ColorKeyColor;
        processorParameters["ColorKeyEnabled"] = false;
        processorParameters["TextureFormat"] = this.TextureFormat;
        processorParameters["GenerateMipmaps"] = this.GenerateMipmaps;
        processorParameters["ResizeTexturesToPowerOfTwo"] = this.ResizeTexturesToPowerOfTwo;
        processorParameters["PremultiplyTextureAlpha"] = false;
        processorParameters["ColorKeyEnabled"] = false;

        //string defaultImp = @"D:\PRAPTISENSE\MonogamePOC\360Viewer\360Viewer\Content\shaders\LPPMainEffect.xnb";

        string directory = @"D:\PRAPTISENSE\MonogamePOC\360Viewer\360Viewer\Content\";
        System.Diagnostics.Debug.WriteLine("BALA ->" + directory);
        string CompeffectSrc = System.IO.Path.Combine(directory, "bin\\Windows\\shaders\\LPPMainEffect.xnb");
        string effectSrc = System.IO.Path.Combine(directory, "shaders\\LPPMainEffect.fx");

        //lppMaterial.CompiledEffect = new ExternalReference<CompiledEffectContent>(effectSrc, material.Identity);
        lppMaterial.Effect = new ExternalReference<EffectContent>(effectSrc);// null;//new ExternalReference<EffectContent>(_customFx.Length == 0 ? effectSrc /*"shaders/LPPMainEffect"*/ : _customFx);
        lppMaterial.CompiledEffect = context.BuildAsset<EffectContent, CompiledEffectContent>(lppMaterial.Effect, "EffectProcessor");
        // new ExternalReference<CompiledEffectContent>(CompeffectSrc); //
        // copy the textures in the original material to the new lpp
        // material
        ExtractTextures(lppMaterial, material);
        //extract the extra parameters
        ExtractDefines(lppMaterial, material, context);

        // and convert the material using the NormalMappingMaterialProcessor,
        // who has something special in store for the normal map.
        return context.Convert<MaterialContent, MaterialContent>
            (lppMaterial, typeof(LightPrePassMaterialProcessor).Name, processorParameters);
    }

Thanks,
Bala

Hi,

After debugging through the MonoGame.Framework.Content.Pipeline code,

I fixed this issue by adding a custom ContentTypeWriter to my processor.

Here it is:

[ContentTypeWriter]
public class LightPrePassWriter : ContentTypeWriter<EffectMaterialContent>
{
    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        var type = typeof(ContentReader);
        var readerType = type.Namespace + ".EffectMaterialReader, " + type.Assembly.FullName;
        return readerType;
    }

    protected override void Write(ContentWriter output, EffectMaterialContent value)
    {
        output.WriteExternalReference(value.CompiledEffect);

        Dictionary<string, object> dict = new Dictionary<string, object>();
        foreach (KeyValuePair<string, ExternalReference<TextureContent>> item in value.Textures)
        {
            //output.WriteExternalReference(value.Textures[item.Key]);
            dict.Add(item.Key, item.Value);
        }
        output.WriteObject<Dictionary<string, object>>(dict);
    }
}

Make sure you give absolute paths for the .fx and .tga references.
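
I have not tried it myself yet, but deriving the content directory from the material's ContentIdentity might avoid the hard-coded path; a rough, untested sketch:

// Untested sketch: resolve the effect relative to the model's own source file
// instead of a hard-coded absolute directory.
string modelDirectory = System.IO.Path.GetDirectoryName(material.Identity.SourceFilename);
string effectSrc = System.IO.Path.Combine(modelDirectory, "shaders\\LPPMainEffect.fx");
lppMaterial.Effect = new ExternalReference<EffectContent>(effectSrc);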

After doing all this, I loaded my model successfully in my game, but it rendered completely black.

When I checked, the textures were not mapped onto my custom effect.

The actual problem is that I clone the effect object after loading the model.

When I cloned it, the EffectParameters that hold my texture details were not cloned. So I debugged the MonoGame source, found the issue in EffectParameter.cs, and fixed it; now cloning works nicely.

Please add this fix to the next MonoGame release.

internal EffectParameter(EffectParameter cloneSource)
{
    // Share all the immutable types.
    ParameterClass = cloneSource.ParameterClass;
    ParameterType = cloneSource.ParameterType;
    Name = cloneSource.Name;
    Semantic = cloneSource.Semantic;
    Annotations = cloneSource.Annotations;
    RowCount = cloneSource.RowCount;
    ColumnCount = cloneSource.ColumnCount;

    // Clone the mutable types.
    Elements = cloneSource.Elements.Clone();
    StructureMembers = cloneSource.StructureMembers.Clone();

    // The data is mutable, so we have to clone it.
    var array = cloneSource.Data as Array;
    if (array != null)
        Data = array.Clone();
    else //*********************** Added by BALA -> Else part was not there
        Data = cloneSource.Data;

    StateKey = unchecked(NextStateKey++);
}
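
A quick way to check that a cloned effect still carries its texture parameters (illustrative only; originalEffect stands for the effect loaded with the model, and the parameter names depend on your .fx file):

// Illustrative check: after cloning, every Texture2D parameter of the clone
// should still return the texture that was set on the original effect.
Effect clone = originalEffect.Clone();
foreach (EffectParameter p in clone.Parameters)
{
    if (p.ParameterType == EffectParameterType.Texture2D)
        System.Diagnostics.Debug.WriteLine(p.Name + " -> " + (p.GetValueTexture2D() != null));
}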

Thanks and Happy Gaming,
Bala

Hi,

I found that some FBX files are not loading the mesh list; the count shows 0.

I debugged the MonoGame source and found that the importer itself is not creating any geometry data.

Why is that? I have tried converting all the models to FBX 2013, but still nothing works.

However, some FBX files that I created myself in Blender load fine.

And all of these FBX files work fine in XNA.

I can share the FBX file if you want. Please help with this issue.

Thanks,
Bala

From what others have said and from my own experience, FBX does not always load correctly. You can try loading the model in Blender and exporting it as a .dae to see if that works better. You can also run the files through Autodesk’s FBX Converter tool to make sure they are valid and visible there (both as FBX and as DAE).

He has already tried converting them.
I think the effort should go into finding the issues in the pipeline and fixing them. I have problems with my FBXes too, but haven’t had time to work on them. Instead I am using XNBs compiled by XNA.

Thanks for your suggestions.

But I can’t use XNA-compiled XNB files, as I am adding a custom effect in my processor.

XNA-compiled XNBs would contain a different version of the pixel and vertex shader functions, right?

I did try loading the FBX in Blender and exporting it my own way. It now seems to load properly, but it is not rendering.

When I draw the model’s BoundingBox, the box is visible but the actual mesh is not.
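
One thing I still need to rule out is the winding/culling of the mesh; a rough test I have not verified yet would be to disable back-face culling before drawing:

// Rough test: draw with culling disabled and the default depth state to see
// whether the mesh appears (which would point to a winding-order issue).
GraphicsDevice.RasterizerState = RasterizerState.CullNone;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
foreach (ModelMesh mesh in model.Meshes)
    mesh.Draw();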

I need to look into it further… Also, has anyone tried a SkyBox in MonoGame? Please help me with this…

Thanks,
Bala